How To Deploy a Scalable and Secure Django Application with Kubernetes


By admin

Django is a powerful and efficient Python web framework. Its clean, high-level design takes care of much of the hassle of web development so that you can focus on building your app, and this free and open-source framework ships with most of the tools you need out of the box. Kubernetes, meanwhile, is an open-source container orchestration platform that lets you run, scale, and manage containerized web applications in the cloud.

If you are a web developer, it is natural to be keen on using Django. But if you are just getting started in web development, deploying a secure Django application with Kubernetes can seem daunting. Don't worry: in this post, we walk through the entire process so that you can do it on your own.

So, let’s begin!

Requirements for Starting the Process

  • First, you will need a Kubernetes 1.15+ cluster with role-based access control (RBAC) enabled. This setup will use a DigitalOcean Kubernetes cluster, but you can create a cluster using a different method if you want.
  • The kubectl command-line tool installed on your local machine and configured to connect to your cluster. You can read more about installing kubectl in the official documentation.
  • A registered domain name is absolutely essential. This tutorial will use your_domain.com throughout. You can get one for free at Freenom or use the domain registrar of your choice.
  • You will also need an ingress-nginx Ingress Controller and the cert-manager TLS certificate manager installed into your cluster and configured to issue TLS certificates.
  • A DNS record with your_domain.com pointing to the Ingress Load Balancer’s public IP address.
  • An S3-compatible object storage bucket to store your Django project's static files, along with a set of access keys for this Space. With a few small changes, you can use any file storage service that Django's storage plugins support.
  • Next, you will need a PostgreSQL server instance, database, and user for your Django app. With a few small changes, you can use any database that Django supports.
  • Next, you will need a Docker Hub account and a public repository. If you need more information on creating these, see Repositories in the Docker documentation.
  • You will also need the Docker engine installed on your local machine.

Once you have all these requirements set up, you’re ready to start the process.

Steps to Deploy a Django Application with Kubernetes

Here are the essential steps that you need to follow for deploying a Django application with Kubernetes:

Step 1: Clone and Configure the Application

For the first step, we can find the application code and Dockerfile in the polls-docker branch of the Django Tutorial Polls App GitHub repository.

You will notice that the polls-docker branch contains a Dockerized version of this Polls app.

Initially, we will use the git command to clone the polls-docker branch of the Django Tutorial Polls App to the local machine:

$ git clone --single-branch --branch polls-docker https://github.com/do-community/django-polls.git

Then we need to navigate into the django-polls directory:

$ cd django-polls

In the directory, inspect the Dockerfile:

$ cat Dockerfile

FROM python:3.7.4-alpine3.10

ADD django-polls/requirements.txt /app/requirements.txt

RUN set -ex \
    && apk add --no-cache --virtual .build-deps postgresql-dev build-base \
    && python -m venv /env \
    && /env/bin/pip install --upgrade pip \
    && /env/bin/pip install --no-cache-dir -r /app/requirements.txt \
    && runDeps="$(scanelf --needed --nobanner --recursive /env \
        | awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
        | sort -u \
        | xargs -r apk info --installed \
        | sort -u)" \
    && apk add --virtual rundeps $runDeps \
    && apk del .build-deps

ADD django-polls /app
WORKDIR /app

ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH

EXPOSE 8000

CMD ["gunicorn", "--bind", ":8000", "--workers", "3", "mysite.wsgi"]

Next, we will focus on building the image using docker build:

$ docker build -t polls .

Once we are done with that, we list available images using docker images:

$ docker images

REPOSITORY   TAG                IMAGE ID       CREATED        SIZE
polls        latest             80ec4f33aae1   2 weeks ago    197MB
python       3.7.4-alpine3.10   f309434dea3a   8 months ago   98.7MB

Before we run the Django container, we will need to configure its running environment using the env file present in the current directory.

For that open the env file in any editor:

$ nano django-polls/env

DJANGO_SECRET_KEY=

DEBUG=True

DJANGO_ALLOWED_HOSTS=

DATABASE_ENGINE=postgresql_psycopg2

DATABASE_NAME=polls

DATABASE_USERNAME=

DATABASE_PASSWORD=

DATABASE_HOST=

DATABASE_PORT=

STATIC_ACCESS_KEY_ID=

STATIC_SECRET_KEY=

STATIC_BUCKET_NAME=

STATIC_ENDPOINT_URL=

DJANGO_LOGLEVEL=info

Next, you will have to fill in missing values for the following keys:

  • DJANGO_SECRET_KEY: A unique, unpredictable value.
  • DJANGO_ALLOWED_HOSTS: Secures the app and prevents HTTP Host header attacks. For testing purposes, set this to *, a wildcard that matches all hosts. In production, you should set this to your_domain.com.
  • DATABASE_USERNAME: The PostgreSQL database user you created.
  • DATABASE_NAME: Set this to polls, or the name of the PostgreSQL database you created.
  • DATABASE_PASSWORD: The password for the PostgreSQL user you created.
  • DATABASE_HOST: Your database's hostname.
  • DATABASE_PORT: Your database's port.
  • STATIC_ACCESS_KEY_ID: Your Space or object storage access key.
  • STATIC_SECRET_KEY: Your Space or object storage access key secret.
  • STATIC_BUCKET_NAME: Your Space name or object storage bucket name.
  • STATIC_ENDPOINT_URL: The appropriate Spaces or object storage endpoint URL.

Once you are done with this, simply save and close the file.
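As a side note, a quick way to generate a suitable DJANGO_SECRET_KEY value is with Python's standard secrets module. This is only a sketch using the standard library; if Django is installed locally, you can use its own get_random_secret_key() helper from django.core.management.utils instead.

```python
import secrets
import string


def generate_secret_key(length: int = 50) -> str:
    """Return a random string suitable for use as DJANGO_SECRET_KEY."""
    # Roughly the character set Django's startproject template uses.
    chars = string.ascii_lowercase + string.digits + "!@#$%^&*(-_=+)"
    return "".join(secrets.choice(chars) for _ in range(length))


print(generate_secret_key())
```

Paste the generated value into the env file; never reuse a key that has been committed to version control.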

Step 2: Creating the Database Schema and Uploading Assets to Object Storage

Here, we will use the docker run command to override the CMD set in the Dockerfile, and then we will create the database schema using the manage.py makemigrations and manage.py migrate commands:

$ docker run --env-file env polls sh -c "python manage.py makemigrations && python manage.py migrate"

Here we run the polls:latest container image, pass in the environment variable file, and override the Dockerfile CMD with sh -c "python manage.py makemigrations && python manage.py migrate". This creates the database schema defined by the app code.

We will have an output like this:

No changes detected
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, polls, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying admin.0003_logentry_add_action_flag_choices... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK
  Applying auth.0010_alter_group_name_max_length... OK
  Applying auth.0011_update_proxy_permissions... OK
  Applying polls.0001_initial... OK
  Applying sessions.0001_initial... OK

This output indicates that the action was successful.

Then, we will need to run another instance of the app container and use an interactive shell inside it to make an administrative user for the Django project.

$ docker run -i -t --env-file env polls sh

This gives us a shell prompt inside the running container, which we can use to create the Django user with the following command:

$ python manage.py createsuperuser

Next, enter the details for the user, then hit CTRL+D to quit the shell and kill the container.

Finally, we will generate the static files for the app and upload them to the DigitalOcean Space with the help of collectstatic:

$ docker run --env-file env polls sh -c "python manage.py collectstatic --noinput"

Output:

121 static files copied.

Now, we can run the app with the following command:

$ docker run --env-file env -p 80:8000 polls

Output:

[2019-10-17 21:23:36 +0000] [1] [INFO] Starting gunicorn 19.9.0

[2019-10-17 21:23:36 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)

[2019-10-17 21:23:36 +0000] [1] [INFO] Using worker: sync

[2019-10-17 21:23:36 +0000] [7] [INFO] Booting worker with pid: 7

[2019-10-17 21:23:36 +0000] [8] [INFO] Booting worker with pid: 8

[2019-10-17 21:23:36 +0000] [9] [INFO] Booting worker with pid: 9

This runs the default command defined in the Dockerfile and maps port 80 on the local machine to port 8000 of the polls container.

We can now visit the app in a web browser by typing http://localhost in the URL bar. Since no route is defined for the / path, you will likely receive a 404 Page Not Found error, which is expected.

Then we will navigate to http://localhost/polls to see the Polls app interface.

To view the administrative interface, visit http://localhost/admin.

After that, we will enter the administrative username and password so that we can access the Polls app’s administrative interface.

Then, we will hit CTRL+C in the terminal window running the Docker container to make sure we kill the container.

Step 3: Pushing the Django App Image to Docker Hub

In this third step, we will push the Django image to the public Docker Hub repository created in the prerequisites. You could also push the image to a private repository.

To start with that, log in to Docker Hub on the local machine with this command:

$ docker login

You must be logged in with your Docker ID to push and pull images from Docker Hub.

If we don’t have a Docker ID, we can create it in just a few minutes from https://hub.docker.com.

The Django image currently has the polls:latest tag. Next, we will need to push it to the Docker Hub repo and to do that, we will have to re-tag the image with the Docker Hub username and repo name:

$ docker tag polls:latest your_dockerhub_username/your_dockerhub_repo_name:latest

And then push the image to the repo:

$ docker push your_dockerhub_username/your_dockerhub_repo_name:latest

Now, since the image is available to Kubernetes on Docker Hub, we can start with rolling it out in the cluster.

Step 4: Set Up the ConfigMap

In Kubernetes, we can inject configuration variables with the help of ConfigMaps and Secrets.

In the beginning, we will create a directory called yaml to store our Kubernetes manifests. Then, we will navigate into the directory.

$ mkdir yaml

$ cd yaml

Launch the text editor to open the file, polls-configmap.yaml:

$ nano polls-configmap.yaml

Now, we will paste the following ConfigMap manifest:

apiVersion: v1
kind: ConfigMap
metadata:
  name: polls-config
data:
  DJANGO_ALLOWED_HOSTS: "*"
  STATIC_ENDPOINT_URL: "your_space_endpoint_URL"
  STATIC_BUCKET_NAME: "your_space_name"
  DJANGO_LOGLEVEL: "info"
  DEBUG: "True"
  DATABASE_ENGINE: "postgresql_psycopg2"

This manifest holds the non-sensitive values from the env file; the credentials and connection details will go into a Secret instead. Fill in the STATIC_ values for your Space or object storage bucket, then save and close the file.

Create the ConfigMap in the cluster using kubectl apply:

$ kubectl apply -f polls-configmap.yaml

configmap/polls-config created

Step 5: Set Up the Secret

Secret values must be base64-encoded, so rather than manually base64-encoding them and placing them into a manifest file, we can create them using an environment variable file, kubectl create secret, and the --from-env-file flag.

Here, we will once again use the env file and remove the variables inserted into the ConfigMap. For that, we need to make a copy of the env file called polls-secrets in the yaml directory:

$ cp ../env ./polls-secrets

In any text editor, open the file:

$ nano polls-secrets

DJANGO_SECRET_KEY=

DEBUG=True

DJANGO_ALLOWED_HOSTS=

DATABASE_ENGINE=postgresql_psycopg2

DATABASE_NAME=polls

DATABASE_USERNAME=

DATABASE_PASSWORD=

DATABASE_HOST=

DATABASE_PORT=

STATIC_ACCESS_KEY_ID=

STATIC_SECRET_KEY=

STATIC_BUCKET_NAME=

STATIC_ENDPOINT_URL=

DJANGO_LOGLEVEL=info

Delete all the variables that were inserted into the ConfigMap manifest. Afterwards, the file will look something like this:

DJANGO_SECRET_KEY=your_secret_key

DATABASE_NAME=polls

DATABASE_USERNAME=your_django_db_user

DATABASE_PASSWORD=your_django_db_user_password

DATABASE_HOST=your_db_host

DATABASE_PORT=your_db_port

STATIC_ACCESS_KEY_ID=your_space_access_key

STATIC_SECRET_KEY=your_space_access_key_secret

We also need to make sure to use the same values used earlier. Once that is done, just save and close the file.

Then, create the Secret in the cluster using the following command:

$ kubectl create secret generic polls-secret --from-env-file=polls-secrets
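For intuition, the --from-env-file flag reads the file as simple KEY=VALUE lines. A rough Python sketch of that kind of parsing (an illustration of the file format, not kubectl's actual implementation):

```python
def parse_env_file(text: str) -> dict:
    """Parse KEY=VALUE lines the way an env file is typically read:
    blank lines and comment lines are skipped, and everything after
    the first '=' belongs to the value."""
    data = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        data[key] = value
    return data


sample = "DATABASE_NAME=polls\n\nDATABASE_PORT=5432\n"
print(parse_env_file(sample))  # {'DATABASE_NAME': 'polls', 'DATABASE_PORT': '5432'}
```

Each resulting key becomes one entry in the Secret's data map.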

We can then inspect the Secret using kubectl describe:

$ kubectl describe secret polls-secret

Output:

Name: polls-secret

Namespace: default

Labels: <none>

Annotations: <none>

Type: Opaque

Data

====

DATABASE_PASSWORD: 8 bytes

DATABASE_PORT: 5 bytes

DATABASE_USERNAME: 5 bytes

DJANGO_SECRET_KEY: 14 bytes

STATIC_ACCESS_KEY_ID: 20 bytes

STATIC_SECRET_KEY: 43 bytes

DATABASE_HOST: 47 bytes

DATABASE_NAME: 5 bytes

Currently, we have stored the app’s configuration in the Kubernetes cluster using the Secret and ConfigMap object types.
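Note that Kubernetes stores Secret values base64-encoded, not encrypted; the byte counts shown by kubectl describe are the lengths of the decoded values. A small Python illustration using the DATABASE_NAME value polls (5 bytes):

```python
import base64

value = "polls"
encoded = base64.b64encode(value.encode()).decode()
print(encoded)              # cG9sbHM=
print(len(value.encode()))  # 5 -- matches "DATABASE_NAME: 5 bytes" above

# base64 is trivially reversible, which is why Secrets still rely on
# RBAC (and ideally encryption at rest) for protection.
assert base64.b64decode(encoded).decode() == value
```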

Step 6: Launching the Django App Using a Deployment

Moving on, we will also need to create a Deployment for our Django app.

We already know that deployments control one or more Pods. Pods encapsulate one or more containers.

In any editor, just open this file, polls-deployment.yaml with the following command:

$ nano polls-deployment.yaml

Now, simply paste in the following Deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: polls-app
  labels:
    app: polls
spec:
  replicas: 2
  selector:
    matchLabels:
      app: polls
  template:
    metadata:
      labels:
        app: polls
    spec:
      containers:
      - image: your_dockerhub_username/app_repo_name:latest
        name: polls
        envFrom:
        - secretRef:
            name: polls-secret
        - configMapRef:
            name: polls-config
        ports:
        - containerPort: 8000
          name: gunicorn

Next, you will have to fill in the appropriate container image name. For this, you can refer to the Django Polls image we pushed to Docker Hub.

This manifest defines a Kubernetes Deployment called polls-app and labels it with the key-value pair app: polls. It runs two replicas of the Pod defined under the template field.

Using envFrom with secretRef and configMapRef, we specify that all the data from the polls-secret Secret and polls-config ConfigMap should be injected into the containers as environment variables.

In the final step, we will expose containerPort 8000 and name it gunicorn.

Eventually, save and close it.
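Conceptually, envFrom flattens each referenced source into container environment variables; per the Kubernetes API reference, when the same key exists in multiple sources, the value from the last listed source takes precedence (here the Secret and ConfigMap hold disjoint keys, so order does not matter). A Python sketch of that merge, using hypothetical stand-in values:

```python
# Hypothetical contents standing in for the real Secret and ConfigMap.
secret_data = {"DJANGO_SECRET_KEY": "s3cr3t", "DATABASE_NAME": "polls"}
config_map_data = {"DEBUG": "True", "DJANGO_LOGLEVEL": "info"}

# Sources are applied in the order they appear under envFrom;
# later dicts overwrite duplicate keys, mirroring "last source wins".
container_env = {}
for source in (secret_data, config_map_data):
    container_env.update(source)

print(sorted(container_env))
```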

Then we will create the Deployment in the cluster using the kubectl apply -f command as shown below:

$ kubectl apply -f polls-deployment.yaml

Output:

deployment.apps/polls-app created

Make sure you check if the Deployment rolled out correctly using kubectl get:

$ kubectl get deploy polls-app

Output:

NAME        READY   UP-TO-DATE   AVAILABLE   AGE
polls-app   2/2     2            2           6m38s

In case we encounter an error, we can use kubectl describe to inspect the failure:

$ kubectl describe deploy

We can also inspect the two Pods using the following command:

$ kubectl get pod

Output:

NAME                         READY   STATUS    RESTARTS   AGE
polls-app-847f8ccbf4-2stf7   1/1     Running   0          6m42s
polls-app-847f8ccbf4-tqpwm   1/1     Running   0          6m57s

Step 7: Permitting External Access Using a Service

In this step, we will create a Service for the Django app.

To confirm that everything is working properly, we will create a temporary NodePort Service to access the Django app from outside the cluster.

Initially, create the file, polls-svc.yaml by using the following command:

$ nano polls-svc.yaml

We will have to paste in the following Service manifest:

apiVersion: v1
kind: Service
metadata:
  name: polls
  labels:
    app: polls
spec:
  type: NodePort
  selector:
    app: polls
  ports:
  - port: 8000
    targetPort: 8000

After that, save the file and close it.

Roll out the Service using the kubectl apply command as shown below:

$ kubectl apply -f polls-svc.yaml

Output:

service/polls created

In order to confirm that Service is created successfully, we use the kubectl get svc command as shown below:

$ kubectl get svc polls

Output:

NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
polls   NodePort   10.245.197.189   <none>        8000:32654/TCP   59s

The output shows the Service's cluster-internal IP and its NodePort (32654).
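The PORT(S) column encodes both the Service port and the randomly assigned NodePort, which by default falls in the 30000-32767 range. A small illustrative helper for pulling a kubectl PORT(S) entry apart:

```python
def parse_nodeport(ports: str):
    """Split a kubectl PORT(S) entry like '8000:32654/TCP' into
    (service_port, node_port, protocol)."""
    spec, protocol = ports.split("/")
    service_port, node_port = (int(p) for p in spec.split(":"))
    return service_port, node_port, protocol


print(parse_nodeport("8000:32654/TCP"))  # (8000, 32654, 'TCP')
```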

In order to connect to the Service, we will need the external IP addresses for our cluster nodes:

$ kubectl get node -o wide

Output:

NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME

pool-7no0qd9e0-364fd Ready <none> 27h v1.18.8 10.118.0.5 203.0.113.1 Debian GNU/Linux 10 (buster) 4.19.0-10-cloud-amd64 docker://18.9.9

pool-7no0qd9e0-364fi Ready <none> 27h v1.18.8 10.118.0.4 203.0.113.2 Debian GNU/Linux 10 (buster) 4.19.0-10-cloud-amd64 docker://18.9.9

pool-7no0qd9e0-364fv Ready <none> 27h v1.18.8 10.118.0.3 203.0.113.3 Debian GNU/Linux 10 (buster) 4.19.0-10-cloud-amd64 docker://18.9.9

Open a web browser and visit the Polls app using any Node’s external IP address and the NodePort. Given the output above, the app’s URL would be http://203.0.113.1:32654/polls.

We should also see the same Polls app interface that we accessed locally.

We can replicate the same test using the /admin route: http://203.0.113.1:32654/admin.

Step 8: Configuring HTTPS Using Nginx Ingress and cert-manager

In the final step, we need to secure external traffic to the app using HTTPS. To do that, we will use the ingress-nginx Ingress Controller and create an Ingress object to route external traffic to the polls Kubernetes Service.

Before going forward with this step, we should remove the echo-ingress Ingress created in the prerequisite tutorial:

$ kubectl delete ingress echo-ingress

We also need to delete the dummy Services and Deployments with the help of kubectl delete svc and kubectl delete deploy commands.

We should also create a DNS A record with your_domain.com pointing to the Ingress Load Balancer’s public IP address.

Once we get a record pointing to the Ingress Controller Load Balancer, we can create an Ingress for your_domain.com and the polls Service.

In any editor, open the file polls-ingress.yaml using the command below:

$ nano polls-ingress.yaml

Paste in the following Ingress manifest:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: polls-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
  tls:
  - hosts:
    - your_domain.com
    secretName: polls-tls
  rules:
  - host: your_domain.com
    http:
      paths:
      - backend:
          serviceName: polls
          servicePort: 8000

Once done, save and close the file.
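A note on API versions: the networking.k8s.io/v1beta1 Ingress API was removed in Kubernetes 1.22. On newer clusters, the equivalent manifest would be written against networking.k8s.io/v1 (a sketch; note the restructured backend, the required path and pathType fields, and spec.ingressClassName replacing the kubernetes.io/ingress.class annotation):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: polls-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - your_domain.com
    secretName: polls-tls
  rules:
  - host: your_domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: polls
            port:
              number: 8000
```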

Now, create the Ingress in the cluster using kubectl apply command as shown below:

$ kubectl apply -f polls-ingress.yaml

Output:

ingress.networking.k8s.io/polls-ingress created

Here, we can use kubectl describe to track the state of the Ingress:

$ kubectl describe ingress polls-ingress

Output:

Name: polls-ingress

Namespace: default

Address: workaround.your_domain.com

Default backend: default-http-backend:80 (<error: endpoints “default-http-backend” not found>)

TLS:

polls-tls terminates your_domain.com

Rules:

Host             Path  Backends
----             ----  --------

your_domain.com

polls:8000 (10.244.0.207:8000,10.244.0.53:8000)

Annotations: cert-manager.io/cluster-issuer: letsencrypt-staging

kubernetes.io/ingress.class: nginx

Events:

Type    Reason    Age   From    Message
----    ------    ---   ----    -------

Normal CREATE 51s nginx-ingress-controller Ingress default/polls-ingress

Normal CreateCertificate 51s cert-manager Successfully created Certificate “polls-tls”

Normal UPDATE 25s nginx-ingress-controller Ingress default/polls-ingress

We can also run describe on the polls-tls Certificate to ensure its successful creation with the command mentioned below:

$ kubectl describe certificate polls-tls

Output:

. . .

Events:

Type    Reason    Age   From    Message
----    ------    ---   ----    -------

Normal Issuing 3m33s cert-manager Issuing certificate as Secret does not exist.

Normal Generated 3m32s cert-manager Stored new private key in temporary Secret resource “polls-tls-v9lv9”

Normal Requested 3m32s cert-manager Created new CertificateRequest resource “polls-tls-drx9c”

Normal Issuing 2m58s cert-manager The certificate has been successfully issued

Given that we used the staging ClusterIssuer, browsers will not trust the certificate, so navigating to your_domain.com will bring us to a certificate error page. This is expected.

If we want to send a test request, we will have to use wget from the command line:

$ wget -O - http://your_domain.com/polls

Output:

. . .

ERROR: cannot verify your_domain.com's certificate, issued by 'CN=Fake LE Intermediate X1':
Unable to locally verify the issuer's authority.
To connect to your_domain.com insecurely, use `--no-check-certificate'.

Here, we can use the --no-check-certificate flag to bypass certificate validation:

$ wget --no-check-certificate -q -O - http://your_domain.com/polls

<link rel="stylesheet" type="text/css" href="https://your_space.nyc3.digitaloceanspaces.com/django-polls/static/polls/style.css">

<p>No polls are available.</p>
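The 'Fake LE Intermediate X1' issuer is the marker of Let's Encrypt's staging environment, whose certificates are deliberately untrusted by browsers. If you want to check an issuer name programmatically, a small illustrative heuristic:

```python
def is_letsencrypt_staging(issuer_cn: str) -> bool:
    """Heuristic: Let's Encrypt staging issuer common names contain
    'Fake' or 'STAGING' (e.g. 'Fake LE Intermediate X1'), while
    production issuers (e.g. 'R3') do not."""
    upper = issuer_cn.upper()
    return "FAKE" in upper or "STAGING" in upper


print(is_letsencrypt_staging("Fake LE Intermediate X1"))  # True
print(is_letsencrypt_staging("R3"))                       # False
```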

Now, we can modify the Ingress to use the production ClusterIssuer.

Once again open polls-ingress.yaml for editing:

$ nano polls-ingress.yaml

Modify the cluster-issuer annotation:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: polls-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - your_domain.com
    secretName: polls-tls
  rules:
  - host: your_domain.com
    http:
      paths:
      - backend:
          serviceName: polls
          servicePort: 8000

After that, save and close the file. Update the Ingress using kubectl apply:

$ kubectl apply -f polls-ingress.yaml

ingress.networking.k8s.io/polls-ingress configured

Running kubectl describe certificate polls-tls and kubectl describe ingress polls-ingress will help track the certificate issuance status:

$ kubectl describe ingress polls-ingress

Output:

. . .

Events:

Type    Reason    Age   From    Message
----    ------    ---   ----    -------

Normal CREATE 23m nginx-ingress-controller Ingress default/polls-ingress

Normal CreateCertificate 23m cert-manager Successfully created Certificate “polls-tls”

Normal UPDATE 76s (x2 over 22m) nginx-ingress-controller Ingress default/polls-ingress

Normal UpdateCertificate 76s cert-manager Successfully updated Certificate “polls-tls”

Navigate to your_domain.com/polls in a web browser to confirm that everything is functioning: you should see the Polls app interface, and the browser should indicate that HTTPS encryption is active (for example, via a padlock in the URL bar).

Finally, before completing the process, we can optionally switch the polls Service type from NodePort to the internal-only ClusterIP type.

Modify polls-svc.yaml using any editor:

$ nano polls-svc.yaml

Change the type from NodePort to ClusterIP:

apiVersion: v1
kind: Service
metadata:
  name: polls
  labels:
    app: polls
spec:
  type: ClusterIP
  selector:
    app: polls
  ports:
  - port: 8000
    targetPort: 8000

Once done, save the file and close it.

Then roll out the changes using kubectl apply:

$ kubectl apply -f polls-svc.yaml --force

service/polls configured

Make sure you confirm that the Service was modified using kubectl get svc:

$ kubectl get svc polls

Output:

NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
polls   ClusterIP   10.245.203.186   <none>        8000/TCP   22s

The output shows that the Service type is now ClusterIP, so the only way to access the app is via our domain and the Ingress we created.

Conclusion

And that is how you deploy a scalable and secure Django application on Kubernetes, a high-level open-source container orchestrator. We hope this article was helpful and that it cleared up your questions; for any further doubts, feel free to reach out to us in the comment section below.
