Kubernetes Deployment Explained for Development Teams

By Brian Galura

Table of contents

Introduction

What is a Kubernetes Deployment?

Why use Kubernetes Deployment?

How to create a Kubernetes Deployment

Application Management Strategies using Kubernetes Deployment

Using workflows for deployment in Convox

Introduction

If you’re running applications on a Kubernetes cluster, you can’t avoid using a deployment.

A Kubernetes Deployment is what enables you to roll out, roll back, and scale versions of your applications. In other words, a Deployment is a resource object used to describe application updates declaratively. You specify your desired application state in the Deployment, and the deployment controller takes action to change the current state into the desired one.

For example, you can specify how many replicas of a Pod you want running, and the Deployment makes that happen.

Because a Kubernetes (or K8s) Deployment manages the lifecycle of your Pods, it can speed up application launches and updates with no downtime. Development teams and managers can use Deployments for automated rolling updates, avoiding the downtime and the errors that manual updates are notorious for.

The rest of this article will delve into Kubernetes deployments in detail so that you can learn how to use them smoothly. We’ll also show you a more efficient way to manage your Kubernetes clusters (and multi-cluster deployments) so that you can use your time more efficiently.

What is a Kubernetes Deployment?

A Kubernetes Deployment is the vehicle you use to provide declarative updates to your containerized applications. You describe the Pods and ReplicaSets you want in a manifest file (typically YAML), and on receiving this input, the deployment controller works to bring the application's current state in line with the declared state.

To understand the above definition better, let us individually ‘unpack’ the three terms used above: Pod, ReplicaSet, and Deployment.

Pods

Within a K8s cluster, you don't run containers directly. A container image needs a layer of abstraction around it, and this is where the Pod comes in. In other words, Pods encapsulate your containers so that Kubernetes can schedule and manage them, and so that you can perform deployments efficiently.
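
To make this concrete, here is a minimal sketch of a standalone Pod manifest. The name and image are placeholders for illustration only:

apiVersion: v1
kind: Pod
metadata:
  name: explainer-pod # placeholder name for illustration
  labels:
    app: webserver
spec:
  containers:
  - name: explainer
    image: nginx:1.25 # any container image your app uses
    ports:
    - containerPort: 80

You rarely create bare Pods like this by hand; higher-level objects such as ReplicaSets and Deployments create them for you.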

ReplicaSet

If your application requires more than one copy of a Pod, the ReplicaSet handles that grouping. This Kubernetes component keeps a specified number of identical instances of your Pod running, which is what makes application scaling possible.
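
For illustration, here is a hedged sketch of a ReplicaSet manifest that keeps three copies of the Pod above running; the name and image are again placeholders:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: explainer-rs # placeholder name for illustration
spec:
  replicas: 3 # keep 3 identical Pods running at all times
  selector:
    matchLabels:
      app: webserver # manage any Pod carrying this label
  template: # Pod template the ReplicaSet stamps out
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: explainer
        image: nginx:1.25 # same placeholder image as above
        ports:
        - containerPort: 80

In practice, you almost never create ReplicaSets directly either; you let a Deployment create and manage them, as explained next.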

Deployment

Alright. So you've got a Pod to run your containers and a ReplicaSet to keep the right number of those Pods running. Why do we need a Kubernetes Deployment, then?

Valid question. Let’s explain.

A Deployment adds flexibility to your Kubernetes cluster and, ultimately, to your application. Whether you're rolling out an update or rolling one back, the Deployment helps you manage Pod instances in a controlled way. Keeping Pods inside a ReplicaSet inside a Deployment makes for a highly effective hierarchy in Kubernetes.

Here’s an image to help you understand this concept better:

[Image: What is a Kubernetes Deployment]
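
Once you create the Deployment later in this article, you can observe this hierarchy for yourself. As a small sketch, assuming the app: webserver label from the example manifest:

$ kubectl get deployments,replicasets,pods -l app=webserver

This lists the Deployment, the ReplicaSet it created, and the Pods that the ReplicaSet created, all sharing the same label.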

Why use Kubernetes Deployment?

In the DevOps universe, efficiency is key, and manually updating Pods within containerized apps is not a wise use of developer time. Imagine upgrading services by hand: stopping old versions, waiting for new ones to launch, verifying that the new versions are valid, rolling out changes, and rolling them back when a failure occurs. Tiring, isn't it?

On top of that, manual updates make it easy to rack up errors, lose productivity, leave resources idle, and run into many other issues.

Deployments solve this problem by enabling you to use repeatable pre-designed patterns (or workflows) for automating the process.

With a Deployment (a K8s cluster object), you specify your desired outcome, and the system focuses on maintaining that desired state for you. Pod status, availability, pausing, rollouts, rollbacks, and versioning are all handled better in this model, by the server side itself.

Using Deployment in Kubernetes, you’re better equipped to leverage the following benefits:

  • Instance updates are faster and seamless because instances are updated one by one, following the rolling update concept.
  • Better control over difficult situations: in the event of an error or problem, rolling back to the previous working version can be done seamlessly.
  • You can pause and resume an update to your Pods or ReplicaSet when using a K8s Deployment.
  • You can run multiple instances of your Pod, so application scaling is smoother (see the example commands after this list).
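
As a hedged sketch of what these benefits look like in practice, here are the kubectl commands behind several of them, assuming the deployment-explainer Deployment created in the next section:

$ kubectl scale deployment deployment-explainer --replicas=10 # scale out
$ kubectl rollout pause deployment deployment-explainer # stop an in-progress update
$ kubectl rollout resume deployment deployment-explainer # continue the update
$ kubectl rollout undo deployment deployment-explainer # roll back to the previous version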

How to Create a Kubernetes Deployment

Now that we’ve answered the question: “what is a Kubernetes deployment?” - let’s move on to how it functions. In this section, you will learn how to create and use deployments conveniently.

Creating a Kubernetes Deployment

The first step to creating a Kubernetes deployment is writing a YAML file for it. The structure of this object is similar to a ReplicaSet's, but its kind should be "Deployment".

Here’s an example deployment object named “deployment-explainer”:

File Name: deployment-explainer.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-explainer # The name of your deployment
spec:
  replicas: 5 # Total number of Pod copies to keep running
  selector: # The Deployment manages Pods whose labels match the ones below
    matchLabels:
      app: webserver
      version: v101
  template: # Pod template used to create the replicas
    metadata:
      labels:
        app: webserver
        version: v101
    spec:
      containers:
      - name: explainer
        image: explainer:latest # Container image reference goes here
        ports:
        - containerPort: 80

Once you have created the YAML file, it is time to apply it. Use the following command to do so (the --record flag is deprecated in newer kubectl versions and can simply be omitted):

$ kubectl apply -f deployment-explainer.yaml --record

Here are a few more useful commands to help you manage your Kubernetes deployment smoothly:

  1. To check whether your deployment was created:
$ kubectl get deployments

Or, use the following command to see a detailed description of this deployment:

$ kubectl describe deployment deployment-explainer
  2. As a deployment also creates a ReplicaSet, check that too:
$ kubectl get replicaset
  3. After creating the ReplicaSet, the deployment creates the Pods. To verify the Pods created and their status:
$ kubectl get pods

In our example above, this should show 5 replicas of the Pod.

  4. Use the following command to delete a deployment:
$ kubectl delete -f deployment-explainer.yaml
  5. Get the running services using the command below:
$ kubectl get services
  6. To perform a rolling update, set a new image or edit the YAML file:
$ kubectl set image deployment deployment-explainer explainer=explainer:2.0

OR

$ kubectl edit deployment deployment-explainer
  7. The rollout status can be checked using:
$ kubectl rollout status deployment deployment-explainer
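
If a rolling update goes wrong, the Deployment's revision history lets you back out of it. A short sketch (the revision number is a placeholder):

$ kubectl rollout history deployment deployment-explainer # list recorded revisions
$ kubectl rollout undo deployment deployment-explainer # roll back to the previous revision
$ kubectl rollout undo deployment deployment-explainer --to-revision=1 # roll back to a specific revision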

Application Management Strategies using Kubernetes Deployment

Some features might demand thorough and detailed testing while some UX/UI changes might just need quick updates. Depending on what your development goals are, you can employ various K8s deployment strategies. Two popular strategies are:

  1. Rolling Update Strategy

This is the default Kubernetes deployment strategy that enables you to replace pods in a controlled way.

The rolling update strategy does not just stage the application launch; it also keeps your Pod count up by maintaining a minimum number of Pods throughout the rollout. However, for short periods, this strategy runs two versions of the same Pod side by side, which can create problems for service consumers that aren't prepared for it.

By default, the strategy ensures that no more than 25% of the desired Pods are unavailable at any time, and that no more than 25% extra Pods are created above the desired count. This implies that a rolling update won't cause any downtime, as long as your application is architected for good fault tolerance.
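
Here is a minimal sketch of how these limits are expressed in the Deployment spec, reusing the deployment-explainer example from earlier; the percentages shown are simply the defaults made explicit:

spec:
  replicas: 5
  strategy:
    type: RollingUpdate # the default strategy
    rollingUpdate:
      maxUnavailable: 25% # at most this share of Pods may be unavailable during the update
      maxSurge: 25% # at most this many extra Pods may be created above the desired count
  # selector and template are unchanged from the full manifest above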

  2. Recreate Strategy

With this strategy, all previous Pods are deleted, and new ones are created. The new containers only start after the old ones have terminated completely. While this can cause downtime, it also means that service consumers' requests are never handled by a mix of old and new versions.

While the old containers are being stopped and the new versions are being configured, there are no active containers for your application, so requests from service consumers won't be processed during this window.
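
To opt into this behavior, you set the strategy type explicitly. A minimal sketch, again based on the deployment-explainer manifest:

spec:
  replicas: 5
  strategy:
    type: Recreate # terminate all old Pods before any new ones are created
  # selector and template are unchanged from the full manifest above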

Using workflows for deployment in Convox

Let's first look at how this is done in plain Kubernetes:

Kubernetes lets you release containerized applications independently of the underlying platform. For example, you can add Azure AKS, Google GKE, and Amazon EKS clusters to the same pipeline. This means that with Kubernetes, multi-cloud development and deployments are possible.

To do this, you will usually need to create an environment, set up Kubernetes resources, and create K8s jobs. Then you will need to write the YAML for parallel deployment across multiple clouds, using Kubernetes namespaces to keep environments separate. Running these commands and preparing lengthy YAML isn't all; you still have to take care of various background and prerequisite activities.
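
As a rough sketch of what doing this by hand involves (the namespace and kubeconfig context names are placeholders), you would create a namespace per environment and apply the same manifest against each target cluster:

apiVersion: v1
kind: Namespace
metadata:
  name: staging # example environment name

$ kubectl apply -f namespace.yaml --context aks-cluster
$ kubectl apply -f deployment-explainer.yaml -n staging --context aks-cluster
$ kubectl apply -f deployment-explainer.yaml -n staging --context gke-cluster

Multiply this by every cluster, environment, and application you run, and the overhead adds up quickly.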

With these required tasks, managing any new deployment (especially multi-cloud deployments) can be complex and time-consuming.

But we won't suggest you do it in this manner, because there's a better way.

Convox acts as a Kubernetes deployment controller, doing the heavy lifting for your team and letting you access and manage K8s directly in a few clicks. This is achieved through Deployment Workflows.

A Deployment Workflow lets you handle the staging and production tasks for your applications easily. Whenever you push code to a particular repository or branch on GitHub/GitLab, the workflow is triggered and updates your application deployed in Convox. If required, you can also trigger a workflow manually instead of on merges.

Creating Deployment Workflows is simple:

[Image: Convox Workflows]

With Convox, handling Kubernetes deployments and managing your Kubernetes cluster is a smooth operation. By automating the complex, lengthy parts of the deployment process, Convox empowers developers and development team managers to deploy their applications easily.

Convox provides an API proxy that is accessible from your Convox racks. With Convox’s easy-to-use interface, problems with the management of Kubernetes credentials have become a thing of the past.

The detailed how-to video shows how you can easily manage multiple deployments with Convox:

[Video: Manage multi-cloud K8s deployments with CircleCI and Convox]

[Image: Convox and CircleCI]

The Final Word

Kubernetes Deployments enable faster development, releases, and scaling on your clusters. However, teams working in single- and multi-cloud environments can find the deployment process complex and discouraging in practice. Acting as a deployment controller, Convox is the perfect solution for these users and their development teams to speed up their production processes and create more robust containerized applications.