How To Deploy a PHP Application with Kubernetes on Ubuntu 16.04


Kubernetes is one of the most widely used open-source container orchestration systems. To run a PHP application on it, you also need Nginx acting as a proxy in front of PHP-FPM. Setting up that proxy inside a single container is tedious and error-prone, which is why developers run Nginx and PHP-FPM in separate containers on Kubernetes. That way, when one of the services is updated, you only have to rebuild the container image that changed.

This guide will show you how to deploy a PHP 7 application on a Kubernetes cluster using the Nginx and PHP-FPM services. Each service will run in its own container, so they can be updated independently. So, let's get started with the tutorial on how to deploy a PHP application with Kubernetes on Ubuntu 16.04.

Introduction to Deploying PHP Application with Kubernetes

The main reason we use Kubernetes (K8s) is to make containerizing applications easier. It manages workloads and services, which helps you scale and automate applications and their data resources. When you run a PHP application, however, you also need an Nginx proxy server, and setting up that proxy inside the same container as PHP is complex and troublesome.

With Kubernetes, though, you can keep the different parts of the stack in different containers. You can reuse these containers and swap them out for others when needed. We will discuss the steps for deploying PHP applications with Kubernetes on Ubuntu 16.04, but before that, let's look at the prerequisites in the next section.

Prerequisites for Deploying PHP Applications with Kubernetes

  • You need at least basic knowledge of Kubernetes objects, such as clusters, nodes, services, and pods.
  • You need a Kubernetes cluster running on Ubuntu 16.04.
  • Along with a DigitalOcean account, you will need an API access token with read and write permissions so the storage plug-in can create block storage volumes.
  • Your application code should be hosted on GitHub or another publicly accessible URL.

Step by Step Process for Deploying a PHP Application with Kubernetes

Step 1: Create PHP-FPM and Nginx Services

The PHP-FPM and Nginx services give you access to a set of pods within the cluster. Services inside the cluster can reach each other by name, so you do not have to track individual pod IP addresses. Creating the PHP-FPM service gives you a stable way to reach the PHP-FPM pods, and creating the Nginx service does the same for the Nginx pods.

Here is why that matters: the Nginx pods will proxy requests to the PHP-FPM pods, but they need a way to find those pods. Kubernetes' built-in service discovery handles this by resolving human-readable service names to the right pods, so requests always end up at the right place.
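If you want to see service discovery in action later, once the services below exist, you can start a throwaway pod and resolve a service name through the cluster DNS. This is only an optional check, not one of the tutorial's steps; dns-test is just a hypothetical pod name, and php is the service created below:

kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup php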

The first step in creating the services is writing an object definition file in YAML format. Every object definition file in Kubernetes is written in YAML and contains at least the following fields:

apiVersion: The version of the Kubernetes API that the definition uses.

kind: The kind of Kubernetes object being defined, such as a Service, Pod, or Deployment.

metadata: Details about the object, such as its name and labels.

spec: The configuration of the object, which depends on its kind. Here you will find information such as the container image and the container ports to use.

You will store these definitions in a directory that you create yourself. SSH into your master node and create the directory with the command shown below:

mkdir definitions

Then go to the brand new definitions directory:

cd definitions

Make a php_service.yaml file to build the PHP-FPM service using the following command:

nano php_service.yaml

Now set kind to Service, which makes the object a service:

apiVersion: v1
kind: Service

Name the service php:

metadata:
  name: php

This approach relies on grouping objects and labeling them individually. Labels let you tell objects apart, for example by placing them in frontend and backend tiers. Add a tier: backend label to the service:

  labels:
    tier: backend

The service determines which pods to route traffic to through selector labels. A pod labeled tier: backend is placed in the backend tier, and the app: php label tells the service that the pod runs the PHP application. Include both of these labels after the metadata section:

spec:
  selector:
    app: php
    tier: backend

Add port 9000 to the php_service.yaml file in the spec of the object:

  ports:
    - protocol: TCP
      port: 9000

Once complete, the YAML file will look like this:

apiVersion: v1
kind: Service
metadata:
  name: php
  labels:
    tier: backend
spec:
  selector:
    app: php
    tier: backend
  ports:
    - protocol: TCP
      port: 9000

Save the file and exit the text editor. Then create the service you just defined with kubectl apply and the -f argument, pointing it at the definition file:

kubectl apply -f php_service.yaml

Once you run the command, the following output confirms that the service was created:

service/php created

To verify if the service is active and running, use this:

kubectl get svc

Here is the output:

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    10m
php          ClusterIP   10.100.59.238   <none>        9000/TCP   5m

That is how you create the PHP service and get it running. Now it's time to create the nginx_service.yaml file.

Open your nano editor and type this:

nano nginx_service.yaml

This service will target the Nginx pods. Name it nginx and label it with tier: backend:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    tier: backend

As with the php service, target the pods that carry the app: nginx and tier: backend labels, and make the service accessible on port 80, the default HTTP port:

spec:
  selector:
    app: nginx
    tier: backend
  ports:
    - protocol: TCP
      port: 80

To make the Nginx service publicly accessible, it needs your Droplet's public IP address. Find the IP in the DigitalOcean Cloud Panel, then add it under spec.externalIPs:

spec:
  externalIPs:
    - your_public_ip

Once done, the nginx_service.yaml file will appear this way:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    tier: backend
spec:
  selector:
    app: nginx
    tier: backend
  ports:
    - protocol: TCP
      port: 80
  externalIPs:
    - your_public_ip

Save and close the editor, then create the Nginx service with the following command:

kubectl apply -f nginx_service.yaml

If the service is created successfully, the output will look like this:

service/nginx created

With the following command, you can list all the services running on the cluster:

kubectl get svc

And the output should show you both PHP and Nginx services:

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>           443/TCP    13m
nginx        ClusterIP   10.102.160.47   your_public_ip   80/TCP     50s
php          ClusterIP   10.100.59.238   <none>           9000/TCP   8m

You can delete a service if you want by executing the following command:

kubectl delete svc/service_name
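If you still have the definition file you applied, you can also delete a service by pointing kubectl at that file instead; for example, using the php_service.yaml created above:

kubectl delete -f php_service.yaml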

Step 2: Install the DigitalOcean Storage Plug-In

Kubernetes needs somewhere to keep your application code and configuration files, and the DigitalOcean storage plug-in provides that by letting the cluster create DigitalOcean block storage. Installing the plug-in adds a storage class named do-block-storage, which you will use later to create block storage volumes.

The plug-in needs your DigitalOcean API token, which you will provide through a Kubernetes Secret object. Secret objects hold sensitive data such as SSH keys, passwords, and API tokens, and make that data available to other Kubernetes objects in the same namespace. Namespaces are a way to logically separate your Kubernetes objects from one another.
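As an aside, kubectl can also create an equivalent Secret with a single command instead of the YAML definition used below; this is only an alternative sketch that assumes the same name, namespace, and key as the file that follows, so use one approach or the other, not both:

kubectl -n kube-system create secret generic digitalocean --from-literal=access-token=your-api-token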

Open a secret.yaml file in your text editor:

nano secret.yaml

Name the Secret object digitalocean and place it in the kube-system namespace, where Kubernetes runs its system components and where the DigitalOcean storage plug-in will run:

apiVersion: v1
kind: Secret
metadata:
  name: digitalocean
  namespace: kube-system

Add your API token under the stringData key as access-token:

stringData:
  access-token: your-api-token

The file will appear like this:

apiVersion: v1
kind: Secret
metadata:
  name: digitalocean
  namespace: kube-system
stringData:
  access-token: your-api-token

Save and exit the file. To create the Secret object, use this command:

kubectl apply -f secret.yaml

The output will be:

secret/digitalocean created

To view the secret, use this command:

kubectl -n kube-system get secret digitalocean

You will get the following output:

NAME           TYPE     DATA   AGE
digitalocean   Opaque   1      41s
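Kubernetes stores the value base64-encoded under the data field, so if you ever want to confirm that the token was saved correctly, you can decode it. This is an optional check, not a tutorial step:

kubectl -n kube-system get secret digitalocean -o jsonpath='{.data.access-token}' | base64 --decode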

Now, install the plug-in with the following command on the terminal:

kubectl apply -f https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v0.3.0.yaml

Output:

storageclass.storage.k8s.io/do-block-storage created
serviceaccount/csi-attacher created
clusterrole.rbac.authorization.k8s.io/external-attacher-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-role created
service/csi-attacher-doplugin created
statefulset.apps/csi-attacher-doplugin created
serviceaccount/csi-provisioner created
clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created
service/csi-provisioner-doplugin created
statefulset.apps/csi-provisioner-doplugin created
serviceaccount/csi-doplugin created
clusterrole.rbac.authorization.k8s.io/csi-doplugin created
clusterrolebinding.rbac.authorization.k8s.io/csi-doplugin created
daemonset.apps/csi-doplugin created

With the plug-in installed, you can now create block storage to hold your application's code and configuration files.
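Before moving on, you can confirm that the do-block-storage storage class from the output above is available; this is an optional check:

kubectl get storageclass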

Step 3: Create a Persistent Volume

Now that you have created the Secret and installed the block storage plug-in, the next step is to create a Persistent Volume (PV). A Persistent Volume is a piece of storage in the Kubernetes cluster whose lifetime is independent of any single pod, so the data it holds survives pod restarts and updates. That means you can manage, configure, and update pods without losing the application code stored on the volume.

You access a PV through a PersistentVolumeClaim (PVC). Follow the steps below to create one.

Open the following file with your text editor:

nano code_volume.yaml

Add the following parameters and values to the file and name the PVC code:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code

The spec of the PVC has a number of items. Let’s look at them below:

  • ReadWriteOnce: The volume can be mounted as read-write by a single node in the cluster.
  • ReadOnlyMany: The volume can be mounted read-only by many nodes, so it cannot be modified.
  • ReadWriteMany: The volume can be mounted as read-write by many nodes.
  • resources: The amount of storage you are requesting for the containerized application.

DigitalOcean block storage can only be mounted on a single node at a time, so set accessModes to ReadWriteOnce. This tutorial requests 1GB of block storage, which is enough to hold the application code; if you need more, change the storage parameter to meet your requirements. To increase the amount of storage later, you have to go through the persistent volume creation process again.

spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Use the do-block-storage storage class created by the DigitalOcean block storage plug-in. It tells Kubernetes how to provision the volume:

  storageClassName: do-block-storage

The whole code_volume.yaml file will now appear like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage

Save and close the file.

With the file in place, use kubectl to create the code PersistentVolumeClaim:

kubectl apply -f code_volume.yaml

The following output indicates that the object was created properly and that the 1GB PVC is ready to be mounted as a volume:

persistentvolumeclaim/code created

You can view the persistent volumes that are available with this command:

kubectl get pv

Here you will see the persistent volume that was provisioned for your claim:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM   STORAGECLASS   REASON   AGE
pvc-ca4df10f-ab8c-11e8-b89d-12331aa95b13   1Gi        RWO            Delete           Bound
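You can also check the claim itself; once the volume has been provisioned, its STATUS column should read Bound. This is an optional check:

kubectl get pvc code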

Step 4: Creating the PHP-FPM Deployment

Now that you have created the Persistent Volume, it's time to create the PHP-FPM Deployment. A Deployment builds, updates, and manages your pods through ReplicaSets, and if an update does not work well, you can roll the Deployment back to a previous revision of the pods.

The Deployment's spec.selector key lists the labels of the pods it manages, while the template key describes the pods it should create whenever they do not exist yet.
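For reference, rollbacks are driven by kubectl rollout. Once the php Deployment below exists, commands like these would list its revision history and revert to the previous revision; they are shown only as a sketch and are not part of this tutorial's steps:

kubectl rollout history deployment/php
kubectl rollout undo deployment/php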

This Deployment also uses an Init Container, a container that runs to completion before the regular containers defined in the pod's template key. Here it fetches the index.php file from GitHub using wget. The content of that file is:

<?php
echo phpinfo();

To create the Deployment, open a php_deployment.yaml file in your editor:

nano php_deployment.yaml

Name the Deployment php; it will manage the PHP-FPM pods. Use the tier: backend label to group the Deployment into the backend tier:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  labels:
    tier: backend

In the Deployment spec, define how many copies of the pod to create with the replicas parameter. The number of replicas depends on your requirements and the resources available. This guide creates one replica:

spec:
  replicas: 1

Add the following info under the selector key section:

  selector:
    matchLabels:
      app: php
      tier: backend

Next, add the pod template, which specifies how the pods will be created. Its labels must match the php service selectors and the Deployment's matchLabels, so add app: php and tier: backend under template.metadata.labels like this:

  template:
    metadata:
      labels:
        app: php
        tier: backend

A pod can contain multiple containers and volumes, but each needs its own name. Containers mount volumes selectively by referencing a volume's name and giving it a mount path. Declare the code volume, backed by the PVC you created earlier, under spec.template.spec.volumes:

    spec:
      volumes:
        - name: code
          persistentVolumeClaim:
            claimName: code

This container uses the php:7-fpm image, but you can find other images on the Docker Store. Add the following under spec.template.spec.containers:

      containers:
        - name: php
          image: php:7-fpm

The container needs access to the code volume in order to run the PHP code, so mount it. Add the following under spec.template.spec.containers.volumeMounts:

          volumeMounts:
            - name: code
              mountPath: /code

Next, you need to get the application code onto the volume, which you will do with an Init Container. You can use a single initContainer that runs a script to build your application, or you can use one initContainer per command, as sketched below. Either way, make sure the volume is mounted to the initContainer.
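The single-script variant could look roughly like this. It is only a sketch of that alternative: it wraps the same wget download used later in this tutorial in sh -c, which would let you chain further commands with &&; it is not the form the tutorial actually uses:

      initContainers:
        - name: install
          image: busybox
          volumeMounts:
            - name: code
              mountPath: /code
          command: ["sh", "-c", "wget -O /code/index.php https://raw.githubusercontent.com/do-community/php-kubernetes/master/index.php"]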

This tutorial uses one init container with the busybox image, which includes the wget utility, to download the code. Add it under spec.template.spec:

      initContainers:
        - name: install
          image: busybox

Mount the code volume at the /code path under spec.template.spec.initContainers:

          volumeMounts:
            - name: code
              mountPath: /code

Then add the following command lines for the install container in spec.template.spec.initContainers:

          command:
            - wget
            - "-O"
            - "/code/index.php"
            - https://raw.githubusercontent.com/do-community/php-kubernetes/master/index.php

The completed php_deployment.yaml file will appear this way:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  labels:
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php
      tier: backend
  template:
    metadata:
      labels:
        app: php
        tier: backend
    spec:
      volumes:
        - name: code
          persistentVolumeClaim:
            claimName: code
      containers:
        - name: php
          image: php:7-fpm
          volumeMounts:
            - name: code
              mountPath: /code
      initContainers:
        - name: install
          image: busybox
          volumeMounts:
            - name: code
              mountPath: /code
          command:
            - wget
            - "-O"
            - "/code/index.php"
            - https://raw.githubusercontent.com/do-community/php-kubernetes/master/index.php

Save the file and close the editor.

Now create the deployment with kubectl:

kubectl apply -f php_deployment.yaml

Output:

deployment.apps/php created

Kubernetes will now download the specified images, the pods will request the Persistent Volume through the PersistentVolumeClaim, and the initContainers will run in order. Once they complete, the regular containers start and mount the volumes at the mount points specified in the Deployment. To view the running Deployment, use this command:

kubectl get deployments

Output:

NAME   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
php    1         1         1            0           19s

To view the pods started in the deployment, use this command:

kubectl get pods

Output:

NAME                   READY   STATUS     RESTARTS   AGE
php-86d59fd666-bf8zd   0/1     Init:0/1   0          9s

READY: shows how many of the pod's containers are ready and running.

STATUS: shows the state of the pod; Init:0/1 here means that 0 of 1 Init Containers have finished.

RESTARTS: shows how many times the pod has been restarted. If an Init Container fails, this number keeps growing as Kubernetes retries it with an increasing back-off delay.

Depending on how complicated the startup script is, it can take a couple of minutes for the status to change to PodInitializing:

NAME                   READY   STATUS            RESTARTS   AGE
php-86d59fd666-lkwgn   0/1     PodInitializing   0          39s

This status means that the Init Containers have finished and the main containers are being prepared. Once they start, the pod's status changes to Running:

NAME                   READY   STATUS    RESTARTS   AGE
php-86d59fd666-lkwgn   1/1     Running   0          1m
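If a pod instead stays stuck in an Init state, you can inspect the install Init Container directly. The pod name below is a placeholder; substitute whatever name kubectl get pods shows you:

kubectl describe pod your_php_pod_name
kubectl logs your_php_pod_name -c install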

Step 5: Create the Nginx Deployment

With the application code mounted and the PHP-FPM service running, the backend is ready to accept connections. The next step is to create the Nginx Deployment, using a ConfigMap to hold the Nginx configuration. A ConfigMap stores configuration in a key-value format that other Kubernetes object definitions can reference.

Keeping the configuration outside the image also lets you reuse or swap the container image, for example to switch Nginx versions, without rebuilding it. When you update the ConfigMap, Kubernetes propagates the new data to every pod that mounts it (although Nginx itself must be reloaded before it picks up the change). Now, create the following file with your text editor:

nano nginx_configMap.yaml

Name it nginx-config and group it into the backend tier with the tier: backend label:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  labels:
    tier: backend

Next, add the data for the ConfigMap. Name the key config and use the contents of the Nginx configuration file as its value. Add the configuration below to the file:

data:
  config : |
    server {
      index index.php index.html;
      error_log /var/log/nginx/error.log;
      access_log /var/log/nginx/access.log;
      root /code;

      location / {
        try_files $uri $uri/ /index.php?$query_string;
      }

      location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
      }
    }

Note that the fastcgi_pass parameter uses the name of your PHP-FPM service (php) rather than an IP address; Kubernetes service discovery resolves the name for you. The complete nginx_configMap.yaml file should look like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  labels:
    tier: backend
data:
  config : |
    server {
      index index.php index.html;
      error_log /var/log/nginx/error.log;
      access_log /var/log/nginx/access.log;
      root /code;

      location / {
        try_files $uri $uri/ /index.php?$query_string;
      }

      location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
      }
    }

Now save the file and exit the text editor.

Create the ConfigMap with this command:

kubectl apply -f nginx_configMap.yaml

Output:

configmap/nginx-config created

Open a new nginx_deployment.yaml file in the text editor:

nano nginx_deployment.yaml

Name it nginx and add the label tier: backend:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    tier: backend

This Deployment will also use one replica. Specify it in the spec, along with the selector:

spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      tier: backend

Add the pod template, using the same labels you used in the Deployment's selector.matchLabels:

  template:
    metadata:
      labels:
        app: nginx
        tier: backend

Now give Nginx access to the code PVC created earlier by adding the following under the spec.template.spec.volumes section:

    spec:
      volumes:
        - name: code
          persistentVolumeClaim:
            claimName: code

Next, mount the ConfigMap as a volume. Using items, map the config key to a file named site.conf by adding the following under the same spec.template.spec.volumes section:

        - name: config
          configMap:
            name: nginx-config
            items:
              - key: config
                path: site.conf

Specify the image to create the pod from. This tutorial uses the nginx:1.7.9 image, but you can pick another from the Docker Store. Make the container available on port 80 by adding the following under spec.template.spec:

      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80

Now mount the code volume at /code:

          volumeMounts:
            - name: code
              mountPath: /code

Then mount the config volume at /etc/nginx/conf.d, the directory from which Nginx loads additional configuration files, by adding the following under volumeMounts:

            - name: config
              mountPath: /etc/nginx/conf.d

Here is your nginx_deployment.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      tier: backend
  template:
    metadata:
      labels:
        app: nginx
        tier: backend
    spec:
      volumes:
        - name: code
          persistentVolumeClaim:
            claimName: code
        - name: config
          configMap:
            name: nginx-config
            items:
              - key: config
                path: site.conf
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          volumeMounts:
            - name: code
              mountPath: /code
            - name: config
              mountPath: /etc/nginx/conf.d

Save and close the file.

To create the Nginx deployment, run the following command:

kubectl apply -f nginx_deployment.yaml

Output:

deployment.apps/nginx created

List the deployments with the command:

kubectl get deployments

Output:

NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            0           16s
php     1         1         1            1           7m

The following command will list the pods managed by the deployments:

kubectl get pods

Output:

NAME                     READY   STATUS    RESTARTS   AGE
nginx-7bf5476b6f-zppml   1/1     Running   0          32s
php-86d59fd666-lkwgn     1/1     Running   0          7m
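If you want to confirm that the ConfigMap was mounted as expected, you can print the generated site.conf from inside the Nginx pod. This is an optional check; the pod name is a placeholder for the name shown by kubectl get pods:

kubectl exec your_nginx_pod_name -- cat /etc/nginx/conf.d/site.conf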

Go to your browser and visit http://your_public_ip. You will see the output of phpinfo(), which confirms that your Kubernetes services are up and running.
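You can run the same check from the command line, using the same your_public_ip placeholder:

curl http://your_public_ip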

Conclusion

You have now deployed a PHP application on Kubernetes, with the application code stored on a persistent volume so you can update the PHP and Nginx containers in the future without losing a single line of code. Deploying PHP applications this way with Kubernetes on Ubuntu 16.04 also makes them easier to scale.
