How To Migrate a Docker Compose Workflow to Kubernetes


To deploy and scale a modern, stateless application on a distributed platform, you first need to containerize its components. Docker Compose helps you modernize and containerize an application during development, and its service definitions describe exactly how to run your container images.

If you then want to run those services on Kubernetes, where you can scale the application with ease, you need to migrate your Docker Compose workflow. Kompose is a conversion tool that speeds up that migration and saves you time. In this article, we will discuss how to move a Docker Compose workflow to Kubernetes using Kompose.

What is Kompose?

Kompose is a tool that helps users move from Docker Compose to Kubernetes; its name is a blend of "Kubernetes" and "Compose". Its function is simple: it transforms a Docker Compose file into Kubernetes resources. Its simplicity and efficiency have made it one of the most popular conversion tools among application developers.

Kompose simplifies both the translation process and the deployment of your containers to a production cluster. A single command, kompose convert, is enough to convert your Docker Compose file.
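As a quick illustration, assuming a docker-compose.yml file sits in your current working directory, the whole conversion can be driven by one command:

# Convert the Compose file in the current directory into Kubernetes manifests.
# Kompose writes one YAML file per generated Kubernetes object.
kompose convert -f docker-compose.yml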

Kubernetes Overview

Kubernetes, also known as k8s, is one of the most popular open-source container orchestrators in the world, used by individuals and organizations alike. Deploying, managing, and scaling containerized apps by hand takes a considerable amount of time; Kubernetes automates these tasks, which makes managing clusters far easier and more convenient.

Kubernetes was originally developed at Google and quickly grew in popularity. It is now an open-source project maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes offers several benefits that are worth knowing:

Benefits of Kubernetes

  • Kubernetes orchestrates containers across numerous hosts.
  • It automates many processes that would otherwise have to be performed manually while developing an app.
  • Many container orchestration platforms only let you scale your resources vertically; Kubernetes lets you scale horizontally as well.
  • Being open-source, Kubernetes gives you greater freedom and flexibility than many similar platforms. You can use it in any environment: public cloud, on-premises, or hybrid.
  • One of its biggest benefits is the rollback feature. If a change you make ends up breaking your app, a rollback returns the Deployment to a previous working revision instead of letting your efforts go down the drain (see the example after this list).
  • Every app developer knows the importance of placing containers well. Kubernetes handles scheduling for you: it calculates the best node for each container and balances workloads across the cluster.
  • Another great advantage of Kubernetes is its self-monitoring feature. Kubernetes keeps a constant watch on the health of your nodes and containers, ensuring they stay in tip-top condition.
  • Besides the above benefits, Kubernetes can manage several clusters at once.
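To make the rollback point concrete, here is a minimal sketch using kubectl; it assumes a Deployment named nodejs already exists in the cluster (the name is only a placeholder):

kubectl rollout history deployment/nodejs                # inspect the revision history
kubectl rollout undo deployment/nodejs                   # return to the previous revision
kubectl rollout undo deployment/nodejs --to-revision=1   # or pick a specific revision

Kubernetes keeps the old ReplicaSets around, which is what makes this near-instant recovery possible.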

Simple Steps to Migrate a Docker Compose Workflow to Kubernetes

Before we walk through the migration steps, take a look at the essential requirements you must satisfy.

Prerequisites

  • An RBAC-enabled Kubernetes 1.10+ cluster.
  • Your local machine or development server must have the kubectl command-line tool installed on it.
  • You must have a Docker Hub account and the Docker software installed on your local machine or development server.

Step 1: Installing Kompose

First, download Kompose (v1.18.0 at the time of writing) from its GitHub Releases page with curl:

curl -L https://github.com/kubernetes/kompose/releases/download/v1.18.0/kompose-linux-amd64 -o kompose

Next, make the binary executable:

chmod +x kompose

After that, move the binary to your path:

sudo mv ./kompose /usr/local/bin/kompose

Check the installed version:

kompose version

If the installation was successful, you will see the following output:

1.18.0 (06a2e56)

Now that you have successfully installed kompose, you can move on to cloning the application.

Step 2: Cloning and Packaging the Application

Next, clone the Node.js project code and package the application so it can run on Kubernetes. Clone the repository into a directory named node_project:

git clone https://github.com/do-community/node-mongo-docker-dev.git node_project

The node_project directory contains the files and directories for a shark information application that takes user input. The project has already been modernized to work with containers. Navigate into it:

cd node_project

The node_project directory includes a Dockerfile with the instructions for building the application image. Build the image with the docker build command, tagging it with your Docker Hub username and naming it node-kubernetes (or any name you like):

docker build -t your_dockerhub_username/node-kubernetes .

We are using the . in the command to specify that the build context is the current directory.

Building the image usually takes a minute or two. Once it completes, check your images with the command below:

docker images

If the application image is ready, you will see output like this:

REPOSITORY                                TAG         IMAGE ID       CREATED         SIZE
your_dockerhub_username/node-kubernetes   latest      9c6f897e1fbc   3 seconds ago   90MB
node                                      10-alpine   94f3c8956482   12 days ago     71MB

Now, log into your Docker Hub account:

docker login -u your_dockerhub_username

Logging into your Docker Hub account will create a ~/.docker/config.json file in your home directory.

Use the docker push command to push your application image to Docker Hub, replacing your_dockerhub_username with your actual username:

docker push your_dockerhub_username/node-kubernetes
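If you want to double-check that the push succeeded, you can pull the image back down from the registry (an optional sanity check):

docker pull your_dockerhub_username/node-kubernetes:latest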

Step 3: Translating Compose Service to Kubernetes Objects with Kompose

Next, you need to translate your Compose service definitions into Kubernetes objects. The docker-compose.yaml file defines how your services run under Compose; Kompose reads those definitions and generates the YAML files for the equivalent Kubernetes objects. Here's how to do it.

First, change some of the definitions in your Docker Compose file to make them compatible with Kubernetes, starting with the restart policies of both containers.

Open the Compose file with nano or any similar editor:

nano docker-compose.yaml

In its current state, the Node.js service definition looks like this:

~/node_project/docker-compose.yaml

services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB
    ports:
      - "80:8080"
    volumes:
      - .:/home/node/app
      - node_modules:/home/node/app/node_modules
    networks:
      - app-network
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js

Now, make the following modifications to this definition:

  • Replace the build instructions with your node-kubernetes image.
  • Change the restart policy from unless-stopped to always.
  • Remove the command instruction and the volumes list.

After you’ve done all this, the new service definition will look like the following:

~/node_project/docker-compose.yaml

services:
  nodejs:
    image: your_dockerhub_username/node-kubernetes
    container_name: nodejs
    restart: always
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB
    ports:
      - "80:8080"
    networks:
      - app-network

Next, in the db service definition, set the restart policy to always and remove the env_file line. Instead of the .env file, you will pass the values for MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD to the database container using the Secret you will create in Step 4.

Here's how the db service definition will look now:

~/node_project/docker-compose.yaml

  db:
    image: mongo:4.1.8-xenial
    container_name: db
    restart: always
    environment:
      - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
    volumes:
      - dbdata:/data/db
    networks:
      - app-network

Finally, navigate to the bottom of the file and remove the node_modules volume from the top-level volumes key. The key will now look like this:

~/node_project/docker-compose.yaml

volumes:
  dbdata:

Save the file and close it.

 

Before you can migrate your service definitions, you must create the .env file. Kompose will use it to create the ConfigMap that holds your non-sensitive information.

Create the .env file:

nano .env

Next, add the following port and database name information to the file, then save and close it:

~/node_project/.env

MONGO_PORT=27017
MONGO_DB=sharkinfo

After that, convert your definitions to YAML files:

kompose convert

Running this command will give you the following output:

INFO Kubernetes file "nodejs-service.yaml" created
INFO Kubernetes file "db-deployment.yaml" created
INFO Kubernetes file "dbdata-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nodejs-deployment.yaml" created
INFO Kubernetes file "nodejs-env-configmap.yaml" created
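If you'd like to sanity-check what Kompose generated from your .env file, inspect the ConfigMap:

cat nodejs-env-configmap.yaml

You should see a ConfigMap named nodejs-env whose data section contains the MONGO_PORT and MONGO_DB values you defined; the exact annotations in the file may differ.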

Once the files are created, proceed with the next step.

Step 4: Creating Kubernetes Secrets

Now, you need to make some modifications to the files created by Kompose so that your application works properly. First, create a Secret for your database username and password. Kubernetes offers two objects for passing environment data to containers: ConfigMaps for non-sensitive information and Secrets for sensitive information. The non-sensitive values you added to the .env file have already become a ConfigMap, so now you only need to create a Secret and you're good to go.

Secret data must be base64-encoded, so to create a Secret you first need to convert your username and password to base64, which represents the data in a consistent, transport-safe form. Here's the command to convert your username:

echo -n 'your_database_username' | base64

 

Write down the output value and convert your password:

echo -n 'your_database_password' | base64

Write down the output value here as well.
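As a quick sanity check, you can decode a value to confirm the encoding round-trips cleanly. This sketch uses a hypothetical username, sammy:

echo -n 'sammy' | base64
# c2FtbXk=
echo 'c2FtbXk=' | base64 --decode
# sammy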

 

Use the command below to open a file for the Secret:

nano secret.yaml

In case you want to check the formatting of your YAML files, use a linter, or validate the syntax with kubectl:

kubectl create -f your_yaml_file.yaml --dry-run --validate=true

Now, create a Secret to define your MONGO_USERNAME and MONGO_PASSWORD using the following code:

~/node_project/secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
data:
  MONGO_USERNAME: your_encoded_username
  MONGO_PASSWORD: your_encoded_password

Don't forget to replace the placeholder values with your actual encoded username and password. After you're done, save and close the file, and add secret.yaml to your .gitignore file, just like you did with .env. Also, note that we have chosen mongo-secret as the name of our Secret object; you can replace it with any name you want.
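As an aside, kubectl can also build this Secret for you and handle the base64 encoding itself. The following one-liner is an equivalent alternative to writing secret.yaml by hand (the values are placeholders); if you take this route, omit secret.yaml from the kubectl create command in Step 7:

kubectl create secret generic mongo-secret \
  --from-literal=MONGO_USERNAME=your_database_username \
  --from-literal=MONGO_PASSWORD=your_database_password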

Next, you have to add references to the Secret to your application deployment.

Open the following file:

nano nodejs-deployment.yaml

The container spec in the nodejs-deployment.yaml file includes the following environment variables under the env key:

~/node_project/nodejs-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
...
    spec:
      containers:
      - env:
        - name: MONGO_DB
          valueFrom:
            configMapKeyRef:
              key: MONGO_DB
              name: nodejs-env
        - name: MONGO_HOSTNAME
          value: db
        - name: MONGO_PASSWORD
        - name: MONGO_PORT
          valueFrom:
            configMapKeyRef:
              key: MONGO_PORT
              name: nodejs-env
        - name: MONGO_USERNAME

Now, add Secret references to the MONGO_USERNAME and MONGO_PASSWORD variables; without them, your application won't have access to those values:

~/node_project/nodejs-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
...
    spec:
      containers:
      - env:
        - name: MONGO_DB
          valueFrom:
            configMapKeyRef:
              key: MONGO_DB
              name: nodejs-env
        - name: MONGO_HOSTNAME
          value: db
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_PASSWORD
        - name: MONGO_PORT
          valueFrom:
            configMapKeyRef:
              key: MONGO_PORT
              name: nodejs-env
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_USERNAME

Save and close the file.

Now, add the same kind of references to the db-deployment.yaml file. Open it first with this command:

nano db-deployment.yaml

After opening the file, add references to the Secret values for the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD variables:

~/node_project/db-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
...
    spec:
      containers:
      - env:
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_PASSWORD
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_USERNAME

Save and close this file.

Now that you're done creating your Secret, it's time to create the database Service and an Init Container that will poll the database, ensuring that the application only tries to connect once the database's startup tasks are complete.

Step 5: Creating the Database Service and an Application Init Container

Creating the database Service and Init Container is one of the most important tasks in this entire process: without them, your application may start before the database is ready to accept connections.

First, open a file:

nano db-service.yaml

Next, define the specifications for the database Service:

~/node_project/db-service.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.18.0 (06a2e56)
  creationTimestamp: null
  labels:
    io.kompose.service: db
  name: db
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    io.kompose.service: db
status:
  loadBalancer: {}

The selector matches the Service object with the database Pods, which are defined with the label io.kompose.service: db in db-deployment.yaml.
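Once you create the objects in Step 7, you will be able to confirm that this selector finds the database Pods by filtering on that label:

kubectl get pods -l io.kompose.service=db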

Then, open the nodejs-deployment.yaml file with the following command:

nano nodejs-deployment.yaml

Now, add an initContainers field to the Pod spec, alongside the containers array. In the nodejs container entry, locate the ports and resources fields and add the initContainers field below them:

~/node_project/nodejs-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
...
    spec:
      containers:
      ...
        name: nodejs
        ports:
        - containerPort: 8080
        resources: {}
      initContainers:
      - name: init-db
        image: busybox
        command: ['sh', '-c', 'until nc -z db 27017; do echo waiting for db; sleep 2; done;']
      restartPolicy: Always

Save and close the file after editing.
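Later, while the Init Container is running (Step 7 shows it with a status of Init:0/1), you can follow its progress by reading its logs with the -c flag; the Pod name below is a placeholder for the name kubectl get pods reports:

kubectl logs nodejs-pod-name -c init-db

You should see a stream of "waiting for db" messages until the database Service becomes reachable.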

Step 6: Modifying the PersistentVolumeClaim and Exposing the Application Frontend

Before running your application, you have to make two final changes. First, modify the storage resource in the PersistentVolumeClaim; this claim lets you dynamically provision storage to manage your application's state.

To work with PersistentVolumeClaims, you need a StorageClass that provisions the storage resources. If you're working with DigitalOcean Kubernetes, your default StorageClass provisioner is dobs.csi.digitalocean.com. Check it with the following command:

kubectl get storageclass

Provided that you're using a DigitalOcean cluster, you will see the following output:

NAME                         PROVISIONER                 AGE
do-block-storage (default)   dobs.csi.digitalocean.com   76m

If you're not using a DigitalOcean cluster, you will have to create a StorageClass and configure a provisioner of your choice; a sketch follows below.
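As a rough sketch only, a StorageClass on another cloud might look like the following example, which uses the in-tree AWS EBS provisioner; the name and parameters here are illustrative, so consult your provider's documentation for the correct values:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2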

You must also ensure that your storage request meets the minimum size your provisioner supports. The minimum DigitalOcean Block Storage unit is 1GB, so you need to modify the PersistentVolumeClaim accordingly. Open dbdata-persistentvolumeclaim.yaml:

nano dbdata-persistentvolumeclaim.yaml

Now, you need to set the storage value to 1Gi:

~/node_project/dbdata-persistentvolumeclaim.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: dbdata
  name: dbdata
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}

Save and close the file, then open nodejs-service.yaml:

nano nodejs-service.yaml

To expose this Service externally, you will use a DigitalOcean Load Balancer. Navigate to the Service specification and set LoadBalancer as the Service type:

~/node_project/nodejs-service.yaml

apiVersion: v1
kind: Service
...
spec:
  type: LoadBalancer
  ports:
...

After you’re finished editing, save and close the file.
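If you are testing on a cluster that has no cloud load balancer (minikube, for example), a NodePort Service is a common alternative; this is a sketch rather than part of this tutorial's DigitalOcean setup:

spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080

You would then reach the application on any node's IP at the port shown in the PORT(S) column of kubectl get svc.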

Step 7: Starting and Accessing the Application

Now, create your Kubernetes objects and verify that the application works as intended. The following command will create the Services and Deployments for the Node application and the MongoDB database, along with your ConfigMap, Secret, and PersistentVolumeClaim:

kubectl create -f nodejs-service.yaml,nodejs-deployment.yaml,nodejs-env-configmap.yaml,db-service.yaml,db-deployment.yaml,dbdata-persistentvolumeclaim.yaml,secret.yaml

After the creation of the objects, you will get the following output:

service/nodejs created
deployment.extensions/nodejs created
configmap/nodejs-env created
service/db created
deployment.extensions/db created
persistentvolumeclaim/dbdata created
secret/mongo-secret created

Type the following to check if your Pods are running:

kubectl get pods

Your db container will start first while the application's Init Container runs. While this is happening, you will see the following output:

NAME                      READY   STATUS              RESTARTS   AGE
db-679d658576-kfpsl       0/1     ContainerCreating   0          10s
nodejs-6b9585dc8b-pnsws   0/1     Init:0/1            0          10s

After your database and application containers have started, the output will look like this:

NAME                      READY   STATUS    RESTARTS   AGE
db-679d658576-kfpsl       1/1     Running   0          54s
nodejs-6b9585dc8b-pnsws   1/1     Running   0          54s

If you see an unanticipated phase in the STATUS column, you can troubleshoot your Pods with the commands below:

kubectl describe pods your_pod

kubectl logs your_pod

Once all of your containers are running, you can access the application. Run the following command to get the IP of your LoadBalancer:

kubectl get svc

You will get this output:

NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
db           ClusterIP      10.245.189.250   <none>        27017/TCP      93s
kubernetes   ClusterIP      10.245.0.1       <none>        443/TCP        25m12s
nodejs       LoadBalancer   10.245.15.56     your_lb_ip    80:30729/TCP   93s

Once you have the external IP, navigate to http://your_lb_ip in your browser. From the landing page, click Get Shark Info to reach a page where you can enter a shark name and a description of its character. Fill in both fields, click Submit, and after a few seconds you will be taken to a page displaying the shark information you entered.

And that's pretty much it: you now have a single-instance setup of the Node.js application with a MongoDB database, running on a Kubernetes cluster.
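From here, scaling out is a single command; for example, this sketch runs three replicas of the Node.js Deployment:

kubectl scale deployment nodejs --replicas=3

Running kubectl get pods afterward will show three nodejs Pods, all served behind the same LoadBalancer Service.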

Conclusion

Now that you know how to migrate a Docker Compose workflow to Kubernetes, you can go even further. The files you've created here can serve as the basis for later stages of app development, including centralized logging and monitoring and backup strategies for your Kubernetes objects. Good luck!
