How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes

With Kubernetes Ingresses, you can flexibly route traffic from outside your Kubernetes cluster to Services inside your cluster. This is accomplished using Ingress Resources, which define rules for routing HTTP and HTTPS traffic to Kubernetes Services, and Ingress Controllers, which implement those rules by load balancing traffic and routing it to the appropriate backend Services.

Nginx, HAProxy, Traefik, and Contour are among the most popular Ingress Controllers. Ingresses provide a more efficient and flexible alternative to setting up multiple LoadBalancer Services, each of which provisions its own dedicated load balancer.

This post will guide you through setting up the Kubernetes-maintained Nginx Ingress Controller, along with Ingress Resources that route traffic to several dummy backend services. Once the Ingress is in place, you will install cert-manager into your cluster to provision and manage TLS certificates, encrypting HTTP traffic to the Ingress.

Before you proceed with the steps for setting up an Nginx Ingress with cert-manager on DigitalOcean Kubernetes, make sure you meet the prerequisites described in the next section.

Prerequisites

  • A Kubernetes 1.15+ cluster with role-based access control (RBAC) enabled. This setup uses a DigitalOcean Kubernetes cluster, but you can also create a cluster with another Kubernetes provider.
  • The kubectl command-line tool installed on your local machine and configured to connect to your cluster.
  • A domain name and DNS A records that you can point to the DigitalOcean Load Balancer used by the Ingress.
  • The wget command-line utility installed on your local machine. You can install wget using the package manager built into your operating system.

How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes

Here are the steps to set up an Nginx Ingress with cert-manager on DigitalOcean Kubernetes:

1. Setting up dummy backend services

Before deploying the Ingress Controller, you will create and roll out two dummy echo Services, to which you will route external traffic using the Ingress. The echo Services will run the hashicorp/http-echo web server container, which returns a page containing a text string passed in when the web server is launched.

Begin by creating and editing a file called echo1.yaml using nano or your favorite editor on your local machine:

$ nano echo1.yaml

Paste in the following Service and Deployment manifest:

echo1.yaml

apiVersion: v1
kind: Service
metadata:
  name: echo1
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 2
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=echo1"
        ports:
        - containerPort: 5678

In this file, you define a Service called echo1, which routes traffic to Pods with the app: echo1 label selector. It accepts TCP traffic on port 80 and routes it to port 5678, http-echo's default port.

You then define a Deployment, also named echo1, which manages Pods with the app: echo1 label selector. You specify that the Deployment should have 2 Pod replicas and that the Pods should start a container named echo1 running the hashicorp/http-echo image. You pass in the text parameter set to echo1 so that the http-echo web server returns echo1. Finally, you open port 5678 on the Pod container.

When you are satisfied with your dummy Service and Deployment manifest, save and close the file.

Then, create the Kubernetes resources using kubectl apply with the -f flag, specifying the file you just saved as a parameter:

$ kubectl apply -f echo1.yaml

Output:

service/echo1 created

deployment.apps/echo1 created

Now verify that the Service started correctly by confirming that it has a ClusterIP, the internal IP on which the Service is exposed:

$ kubectl get svc echo1

Output:

NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
echo1   ClusterIP   10.245.222.129   <none>        80/TCP    60s

This indicates that the echo1 Service is now available internally at 10.245.222.129 on port 80, and that it will forward traffic to containerPort 5678 on the Pods it selects.
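If you want to double-check that the Service's label selector actually matched the echo1 Pods, you can also list the Service's Endpoints; the command below is standard kubectl, and the Pod IPs in the sample output are illustrative:

$ kubectl get endpoints echo1

Output:

NAME    ENDPOINTS                             AGE
echo1   10.244.0.12:5678,10.244.0.13:5678     60s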

Since the echo1 Service is up and running, you can repeat this process for the echo2 Service.

Create and open a file named echo2.yaml:

$ nano echo2.yaml

echo2.yaml

apiVersion: v1
kind: Service
metadata:
  name: echo2
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo2
spec:
  selector:
    matchLabels:
      app: echo2
  replicas: 1
  template:
    metadata:
      labels:
        app: echo2
    spec:
      containers:
      - name: echo2
        image: hashicorp/http-echo
        args:
        - "-text=echo2"
        ports:
        - containerPort: 5678

Here you use the same Service and Deployment manifest as above, but name and relabel the Service and Deployment echo2. In addition, you create only 1 Pod replica. Make sure that you set the text parameter to echo2 so that the web server returns the text echo2.

Save and close the file. Finally, proceed with creating the Kubernetes resources using kubectl:

$ kubectl apply -f echo2.yaml

Output:

service/echo2 created

deployment.apps/echo2 created

Once again, verify that the Service is up and running:

$ kubectl get svc

You should see both the echo1 and echo2 Services, each with an assigned ClusterIP.

Output:

NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
echo1        ClusterIP   10.245.222.129   <none>        80/TCP    6m6s
echo2        ClusterIP   10.245.128.224   <none>        80/TCP    6m3s
kubernetes   ClusterIP   10.245.0.1       <none>        443/TCP   4d21h

Now that your dummy echo web services are up and running, you can proceed to roll out the Nginx Ingress Controller.

2. Setting up the Kubernetes Nginx Ingress Controller

Here, you will roll out v0.34.1 of the Kubernetes-maintained Nginx Ingress Controller. The instructions in this post are based on the official Kubernetes Nginx Ingress Controller installation guide.

The Nginx Ingress Controller consists of a Pod that runs the Nginx web server and watches the Kubernetes Control Plane for new and updated Ingress Resource objects. An Ingress Resource is essentially a list of traffic routing rules for backend Services. Ingress Resources let you perform host-based routing: for instance, routing requests that hit web1.your_domain.com to the backend Kubernetes Service web1.
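As a sketch of what such a rule looks like (assuming a backend Service named web1 listening on port 80, which is not part of this tutorial), a host-based routing rule in the networking.k8s.io/v1beta1 API used throughout this guide would read:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web1-ingress
spec:
  rules:
  - host: web1.your_domain.com
    http:
      paths:
      - backend:
          serviceName: web1
          servicePort: 80

You will build a real version of this for the echo Services in Step 3.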

In this scenario, because you are deploying the Ingress Controller to a DigitalOcean Kubernetes cluster, the Controller will create a LoadBalancer Service that provisions a DigitalOcean Load Balancer, to which all external traffic is directed. The Load Balancer routes external traffic to the Ingress Controller Pod running Nginx, which then forwards the traffic to the appropriate backend Services.

You will start this process by creating the Nginx Ingress Controller Kubernetes resources. These comprise ConfigMaps containing the Controller's configuration, Role-based Access Control (RBAC) Roles granting the Controller access to the Kubernetes API, and the Ingress Controller Deployment itself, which uses v0.34.1 of the Nginx Ingress Controller image. For the full list of required resources, consult the manifest in the Kubernetes Nginx Ingress Controller's GitHub repo.

To create the resources, use kubectl apply with the -f flag, specifying the manifest file hosted on GitHub:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/do/deploy.yaml

We use kubectl apply here so that, in the future, you can incrementally apply changes to the Ingress Controller objects instead of completely overwriting them.
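If you would like to preview what an apply will change before running it, kubectl provides a diff subcommand (available in kubectl 1.13 and later) that compares a manifest against the live cluster state:

$ kubectl diff -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/do/deploy.yaml

On a first install this simply shows everything as new; on later runs it shows only the fields that would change.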

You should see output resembling the following:

Output:

namespace/ingress-nginx created

serviceaccount/ingress-nginx created

configmap/ingress-nginx-controller created

clusterrole.rbac.authorization.k8s.io/ingress-nginx created

clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created

role.rbac.authorization.k8s.io/ingress-nginx created

rolebinding.rbac.authorization.k8s.io/ingress-nginx created

service/ingress-nginx-controller-admission created

service/ingress-nginx-controller created

deployment.apps/ingress-nginx-controller created

validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created

clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created

job.batch/ingress-nginx-admission-create created

job.batch/ingress-nginx-admission-patch created

role.rbac.authorization.k8s.io/ingress-nginx-admission created

rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created

serviceaccount/ingress-nginx-admission created

This output also serves as a convenient summary of all the Ingress Controller objects that you have created from the deploy.yaml manifest.

Now, confirm that the Ingress Controller Pods have started:

$ kubectl get pods -n ingress-nginx \
    -l app.kubernetes.io/name=ingress-nginx --watch

Output:

NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-l2jhk       0/1     Completed   0          13m
ingress-nginx-admission-patch-hsrzf        0/1     Completed   0          13m
ingress-nginx-controller-c96557986-m47rq   1/1     Running     0          13m

Press Ctrl+C to return to your prompt.

Now, confirm that the DigitalOcean Load Balancer was successfully created by fetching the Service details with kubectl:

$ kubectl get svc --namespace=ingress-nginx

After several minutes, you should see an external IP address, corresponding to the IP address of the DigitalOcean Load Balancer.

Output:

NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.245.201.120   203.0.113.0   80:31818/TCP,443:31146/TCP   14m
ingress-nginx-controller-admission   ClusterIP      10.245.239.119   <none>        443/TCP                      14m

Note down the Load Balancer's external IP address; you will need it in a later step.

This load balancer receives traffic on HTTP and HTTPS ports 80 and 443 and forwards it to the Ingress Controller Pod. The Ingress Controller then routes the traffic to the appropriate backend Service.
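As a convenience, you can also extract the external IP programmatically with a jsonpath query against the Service's status field, for example to store it in a shell variable. This one-liner is a sketch, not required by the tutorial:

$ LB_IP=$(kubectl get svc ingress-nginx-controller \
    --namespace=ingress-nginx \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

$ echo $LB_IP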

3. Creating the Ingress Resource

We will start this step by creating a minimal Ingress Resource that routes traffic directed at a given subdomain to a corresponding backend Service.

In this step, you will be making use of the test domain example.com. You can replace this with your own domain name.

Start by creating a simple rule that routes traffic directed at echo1.example.com to the echo1 backend Service and traffic directed at echo2.example.com to the echo2 backend Service.

Open a file named echo_ingress.yaml in your favorite editor:

$ nano echo_ingress.yaml

Then, paste in the following ingress definition:

echo_ingress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  rules:
  - host: echo1.example.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.example.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80

Once you are finished editing the Ingress rules, you can save and close the file.

Here, you create an Ingress Resource called echo-ingress and route traffic based on the Host header. The Host header of an HTTP request specifies the domain name of the target server. Requests with host echo1.example.com will be directed to the echo1 backend set up in Step 1, and requests with host echo2.example.com will be directed to the echo2 backend.

You can now proceed with creating the Ingress using kubectl:

$ kubectl apply -f echo_ingress.yaml

Output:

ingress.networking.k8s.io/echo-ingress created

To test the Ingress, navigate to your DNS management service and create A records for echo1.example.com and echo2.example.com pointing to the DigitalOcean Load Balancer's external IP. This is the external IP address of the ingress-nginx Service, which you fetched in the previous step.
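If your DNS changes have not propagated yet, you can still test the routing by pinning the hostname to the Load Balancer IP with curl's --resolve flag, substituting your Load Balancer's actual external IP for 203.0.113.0:

$ curl --resolve echo1.example.com:80:203.0.113.0 http://echo1.example.com

This sends the request straight to the Load Balancer while presenting the correct Host header, so the Ingress rules are exercised exactly as they would be with live DNS.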

Once the echo1.example.com and echo2.example.com DNS records have been created, you can test the Ingress Controller and Resource that you have created using the curl command-line utility.

From your local machine, curl the echo1 Service:

$ curl echo1.example.com

Output:

echo1

This confirms that your request to echo1.example.com is being correctly routed through the Nginx ingress to the echo1 backend Service.

Now, perform the same test for the echo2 Service:

$ curl echo2.example.com

Output:

echo2

This verifies that your request to echo2.example.com is correctly routed through the Nginx Ingress to the echo2 backend Service.

At this point, you have successfully set up a minimal Nginx Ingress that performs virtual host-based routing.

4. Installing and Configuring Cert-Manager

This step will guide you through installing v0.16.1 of cert-manager into your cluster. cert-manager is a Kubernetes add-on that provisions TLS certificates from Let's Encrypt and other certificate authorities (CAs) and manages their life cycles.

Certificates can be automatically requested and configured by annotating Ingress Resources, appending a tls section to the Ingress spec, and configuring one or more Issuers or ClusterIssuers to specify your preferred certificate authority. To learn more about Issuer and ClusterIssuer objects, consult the official cert-manager documentation on Issuers.

Start by installing cert-manager and its Custom Resource Definitions (CRDs), such as Issuers and ClusterIssuers, following the official installation instructions. Note that a namespace called cert-manager will be created, into which the cert-manager objects will be placed:

$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.16.1/cert-manager.yaml

Output:

customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created

customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created

customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created

customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created

. . .

deployment.apps/cert-manager-webhook created

mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

To verify the installation, check the cert-manager Namespace for running Pods:

$ kubectl get pods –namespace cert-manager

Output:

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-578cd6d964-hr5v2              1/1     Running   0          99s
cert-manager-cainjector-5ffff9dd7c-f46gf   1/1     Running   0          100s
cert-manager-webhook-556b9d7dfd-wd5l6      1/1     Running   0          99s

This indicates that the cert-manager installation succeeded.
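As an optional extra check, the cert-manager documentation suggests verifying the webhook by issuing a throwaway self-signed certificate. The manifest below is a sketch of that check; the cert-manager-test namespace and resource names are arbitrary:

test-resources.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager-test
---
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: test-selfsigned
  namespace: cert-manager-test
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: selfsigned-cert
  namespace: cert-manager-test
spec:
  dnsNames:
  - example.com
  secretName: selfsigned-cert-tls
  issuerRef:
    name: test-selfsigned

Apply it, describe the resulting certificate (its Events should report a successful issuance), and clean up:

$ kubectl apply -f test-resources.yaml

$ kubectl describe certificate -n cert-manager-test

$ kubectl delete -f test-resources.yaml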

Before you begin issuing certificates for the echo1.example.com and echo2.example.com domains, you need to create an Issuer, which specifies the certificate authority from which signed x509 certificates can be obtained. In this guide, you will use the Let's Encrypt certificate authority, which provides free TLS certificates and offers both a staging server for testing your certificate configuration and a production server for rolling out verifiable TLS certificates.

Create a test ClusterIssuer to make sure the certificate provisioning mechanism is functioning correctly. A ClusterIssuer is not namespace-scoped and can be used by Certificate resources in any namespace.

Start by opening a file named staging_issuer.yaml in your favorite text editor:

$ nano staging_issuer.yaml

Then, paste in the following ClusterIssuer manifest:

staging_issuer.yaml

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your_email_address_here
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

Here you specify that you want to create a ClusterIssuer called letsencrypt-staging that uses the Let's Encrypt staging server. You will later use the production server to roll out your certificates, but the production server rate-limits requests made against it, so for testing purposes you should use the staging URL.

You then specify an email address to register the certificate and create a Kubernetes Secret called letsencrypt-staging to store the ACME account's private key. You also use the HTTP-01 challenge mechanism.

Now, you are required to roll out the ClusterIssuer by using the following command:

$ kubectl create -f staging_issuer.yaml

Output:

clusterissuer.cert-manager.io/letsencrypt-staging created
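You can also confirm that the ClusterIssuer registered an account with the ACME staging server by describing it and checking for a Ready condition in its status; the exact message text may vary between cert-manager versions:

$ kubectl describe clusterissuer letsencrypt-staging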

Next, repeat this process to create the production ClusterIssuer. Note that certificates will only be created after you annotate and update the Ingress Resource provisioned in the previous step.

Now, open a file called prod_issuer.yaml in your favorite editor:

$ nano prod_issuer.yaml

Then, paste in the following manifest:

prod_issuer.yaml

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your_email_address_here
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

Note the different ACME server URL and the letsencrypt-prod Secret name.

When you have finished editing, you can save and close the file.

Then roll out this Issuer using kubectl:

$ kubectl create -f prod_issuer.yaml

Output:

clusterissuer.cert-manager.io/letsencrypt-prod created

Now that you have created your Let's Encrypt staging and prod ClusterIssuers, you can modify the Ingress Resource created above to enable TLS encryption for the echo1.example.com and echo2.example.com paths.

If you are using DigitalOcean Kubernetes, you first need to implement a workaround so that Pods can communicate with other Pods using the Ingress. If you are not using DigitalOcean Kubernetes, you can skip ahead to Step 6.

5. Enabling Pod Communication through the Load Balancer (optional step)

Before Let's Encrypt can provision certificates, cert-manager first performs a self-check to make sure that Let's Encrypt can reach the cert-manager Pod that validates your domain. For this check to pass on DigitalOcean Kubernetes, you need to enable Pod-Pod communication through the Nginx Ingress load balancer.

To do this, you will create a DNS A record that points to the external IP of the cloud load balancer, and annotate the Nginx Ingress Service manifest with this subdomain.

Start by navigating to your DNS management service and create an A record for workaround.example.com pointing to the DigitalOcean Load Balancer's external IP, which is the external IP address for the ingress-nginx Service that you fetched in Step 2. This guide uses the subdomain workaround, but you are free to use whichever subdomain you prefer.

Once you have created a DNS record pointing to the Ingress load balancer, annotate the Ingress LoadBalancer Service with the do-loadbalancer-hostname annotation. Open a file named ingress_nginx_svc.yaml in your favorite editor and paste in the following LoadBalancer manifest:

ingress_nginx_svc.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
    service.beta.kubernetes.io/do-loadbalancer-hostname: "workaround.example.com"
  labels:
    helm.sh/chart: ingress-nginx-2.11.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.34.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller

This Service manifest was extracted from the complete Nginx Ingress manifest file that you installed in Step 2. Be sure to copy the Service manifest corresponding to the Nginx Ingress version that you installed; in this tutorial, that is 0.34.1. Also be sure to set the do-loadbalancer-hostname annotation to the workaround.example.com domain.

When you are finished, you can save and close the file.

Now, modify the running ingress-nginx-controller Service using kubectl apply:

$ kubectl apply -f ingress_nginx_svc.yaml

Output:

service/ingress-nginx-controller configured

This confirms that you have annotated the ingress-nginx-controller Service, and that Pods in your cluster can now communicate with one another through this ingress-nginx-controller Load Balancer.
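If you want to confirm the workaround from inside the cluster, you can launch a short-lived Pod and request one of your Ingress hostnames from it. This sketch assumes the public busybox image and that your echo DNS records from Step 3 already resolve:

$ kubectl run -i --rm workaround-test --image=busybox --restart=Never -- wget -qO- http://echo1.example.com

If the Pod prints echo1, Pod-to-Pod traffic through the Load Balancer hostname is working.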

6. Issuing Staging and Production Let’s Encrypt Certificates

To issue a staging TLS certificate for your domains, you will annotate echo_ingress.yaml with the ClusterIssuer created in Step 4. This uses ingress-shim to automatically create and issue certificates for the domains specified in the Ingress manifest.

Now, open up echo_ingress.yaml in your favorite editor:

$ nano echo_ingress.yaml

Proceed with adding the following to the Ingress resource manifest:

echo_ingress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
  tls:
  - hosts:
    - echo1.example.com
    - echo2.example.com
    secretName: echo-tls
  rules:
  - host: echo1.example.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.example.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80

Here you add an annotation that sets the cert-manager ClusterIssuer to letsencrypt-staging, the test certificate ClusterIssuer that you created in Step 4.

You also add a tls block that specifies the hosts for which you want to acquire certificates, as well as a secretName. This Secret will contain the TLS private key and the issued certificate. Make sure to swap out example.com with the domain for which you have created DNS records.

Once you have made the changes, save and close the file.

Now push this update to the existing Ingress object using kubectl apply:

$ kubectl apply -f echo_ingress.yaml

Output:

ingress.networking.k8s.io/echo-ingress configured

You can use kubectl describe to track the state of the Ingress changes you have made:

$ kubectl describe ingress

Output:

Events:

Type    Reason             Age               From                      Message
----    ------             ----              ----                      -------
Normal  UPDATE             6s (x3 over 80m)  nginx-ingress-controller  Ingress default/echo-ingress
Normal  CreateCertificate  6s                cert-manager              Successfully created Certificate "echo-tls"
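If the Certificate does not become ready after a few minutes, the intermediate resources that cert-manager creates along the way are the best place to look. These kinds were installed as CRDs in Step 4, so the following should work as-is:

$ kubectl get certificaterequests,orders,challenges --all-namespaces

A Challenge stuck in a pending state usually indicates that Let's Encrypt cannot reach the HTTP-01 solver, which is exactly what the workaround in Step 5 addresses on DigitalOcean Kubernetes.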

Once the certificate has been created, you can run kubectl describe on it to confirm that it was created successfully:

$ kubectl describe certificate

Output:

Events:

Type    Reason     Age   From          Message
----    ------     ----  ----          -------
Normal  Requested  64s   cert-manager  Created new CertificateRequest resource "echo-tls-vscfw"
Normal  Issuing    40s   cert-manager  The certificate has been successfully issued

This confirms that the TLS certificate has been successfully issued and that HTTPS encryption is now active for the two configured domains.

You can now send a request to a backend echo server to check that HTTPS is working correctly.

Run the following wget command to send a request to echo1.example.com and print the response headers to STDOUT:

$ wget --save-headers -O- echo1.example.com

Output:

. . .

HTTP request sent, awaiting response... 308 Permanent Redirect

. . .

ERROR: cannot verify echo1.example.com's certificate, issued by 'CN=Fake LE Intermediate X1':

Unable to locally verify the issuer's authority.

To connect to echo1.example.com insecurely, use `--no-check-certificate'.

This indicates that HTTPS has been successfully enabled, but that the certificate cannot be verified, because it is a fake temporary certificate issued by the Let's Encrypt staging server.
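If you would like to inspect the served certificate's issuer directly, openssl can print it. This optional check assumes openssl is installed on your local machine:

$ echo | openssl s_client -connect echo1.example.com:443 \
    -servername echo1.example.com 2>/dev/null | openssl x509 -noout -issuer

For the staging certificate, the issuer line should mention Fake LE Intermediate X1, matching the wget error above.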

Now that you have tested that everything works using this temporary fake certificate, you can roll out production certificates for the two hosts echo1.example.com and echo2.example.com. To do this, you will use the letsencrypt-prod ClusterIssuer.

You need to update echo_ingress.yaml to use letsencrypt-prod:

$ nano echo_ingress.yaml

Then, make the following changes to the file:

echo_ingress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - echo1.example.com
    - echo2.example.com
    secretName: echo-tls
  rules:
  - host: echo1.example.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.example.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80

Here, you update the ClusterIssuer name to letsencrypt-prod.

When you are satisfied with the changes you have made, save and close the file.

Roll out the changes using the kubectl apply command as shown below:

$ kubectl apply -f echo_ingress.yaml

Output:

ingress.networking.k8s.io/echo-ingress configured

Then, wait a couple of minutes for the Let's Encrypt production server to issue the certificate. You can track its progress using kubectl describe on the certificate object:

$ kubectl describe certificate echo-tls

Once you see the following output, you can be confident that the certificate has been issued successfully:

Normal  Issuing    28s                 cert-manager  Issuing certificate as secret was previously issued by ClusterIssuer.cert-manager.io/letsencrypt-staging
Normal  Reused     28s                 cert-manager  Reusing private key stored in existing Secret resource "echo-tls"
Normal  Requested  28s                 cert-manager  Created new CertificateRequest resource "echo-tls-49gmn"
Normal  Issuing    2s (x2 over 4m52s)  cert-manager  The certificate has been successfully issued.

Now perform a test using curl to verify that HTTPS is working correctly:

$ curl echo1.example.com

Output:

<html>

<head><title>308 Permanent Redirect</title></head>

<body>

<center><h1>308 Permanent Redirect</h1></center>

<hr><center>nginx/1.15.9</center>

</body>

</html>

This indicates that HTTP requests are being redirected to HTTPS.

Run curl on https://echo1.example.com:

$ curl https://echo1.example.com

Output:

echo1

You can also run the previous command with the verbose -v flag to dig deeper into the certificate handshake and verify the certificate information.
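For example, output along these lines (abridged; the exact lines vary by curl version) shows the negotiated TLS parameters and a successful verification:

$ curl -v https://echo1.example.com

* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* Server certificate:
*  subject: CN=echo1.example.com
*  SSL certificate verify ok.
. . .

echo1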

At this point, you have successfully configured HTTPS using a Let's Encrypt certificate for your Nginx Ingress.

Final Words

We hope this post has guided you well in setting up an Nginx Ingress to load balance and route external requests to backend Services inside your Kubernetes cluster. We also covered securing the Ingress by installing the cert-manager certificate provisioner and setting up a Let's Encrypt certificate for two host paths.
