Manage your Cloudflare domains automatically with an Nginx Ingress controller and External DNS, together with SSL Certificates through Cert Manager

So you have created a Kubernetes cluster with some pods running on it and are ready to ship your production application. But how do you route your domain to it? And more importantly, how do you do this securely, generating SSL certificates for traffic into the cluster and managing your domain in an automated way so that it resolves to the right services?

Well, this is where Cloudflare, External DNS and Cert Manager come into action!

Prerequisites

  • An Azure Kubernetes Service (AKS) cluster, installed and reachable via kubectl
  • Helm installed (brew install helm)
  • A Cloudflare account (API token & account email)

Setting Local Variables

First, create the local variables that will be used throughout this post. For Cloudflare, create an API token with the Zone → DNS → Edit permission.

export NS_NAME_INGRESS=ingress-nginx
export NS_NAME_CERT_DNS=domain-cert-dns
export CF_API_EMAIL='<YOUR_EMAIL>'

# Create a token with "Zone, DNS, Edit" permissions
# https://dash.cloudflare.com/profile/api-tokens
# note: despite the variable name, this holds an API token, not the Global API Key
export CF_API_KEY=<YOUR_CLOUDFLARE_TOKEN>
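
Before wiring the token into the cluster, it's worth a quick sanity check against Cloudflare's token verification endpoint; a valid token reports "status": "active" in the response:

# Optional: verify the token works before using it
curl -s -H "Authorization: Bearer $CF_API_KEY" \
    https://api.cloudflare.com/client/v4/user/tokens/verify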

Creating Namespace & Secrets

We isolate all our resources for easy deletion later by creating two separate namespaces:

  • Ingress Namespace: Manages the nginx ingress controller and allows us to easily scale it out later.
  • Certificates & External DNS Resources Namespace: Contains all the resources for managing our certificates, as well as the External DNS configuration (Cloudflare in our case).

# Create namespaces
kubectl create namespace $NS_NAME_INGRESS
kubectl create namespace $NS_NAME_CERT_DNS

kubectl create secret generic cloudflare --namespace $NS_NAME_CERT_DNS \
    --from-literal=cf-api-key=$CF_API_KEY \
    --from-literal=cf-api-email=$CF_API_EMAIL
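
A quick check that both keys landed in the secret (describe shows the key names and sizes without printing the values):

kubectl describe secret cloudflare -n $NS_NAME_CERT_DNS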

Creating Ingress Controller

The first real service we create is the ingress controller. This is our primary entry point into the cluster: a request to the controller's public IP alone won't return a useful response, but requests for our domains will be routed to the correct service running in the cluster.

In other words, when we access example.com, the domain resolves to an IP A.B.C.D, which is the address of the Nginx Ingress Controller's LoadBalancer service. The controller then inspects the request's host and forwards it to the matching service (say, my-example-service), which in turn routes to one of its pods.

Note: we need to ensure the health probes are correct for Azure, so we provide an extra annotation here.

# Install Ingress Controller
# this is our main entrypoint to the cluster
# note: we apply https://github.com/kubernetes/ingress-nginx/issues/10863
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm upgrade -i ingress-nginx ingress-nginx/ingress-nginx \
    --namespace $NS_NAME_INGRESS \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
    --wait

export INGRESS_IP=$(kubectl get svc -n $NS_NAME_INGRESS ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
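
Even before any DNS records exist, you can verify the routing path described above by curling the load balancer IP directly with a Host header (api.example.com stands in for your own domain; with no Ingress rules defined yet, expect the controller's default 404):

# Simulate a request for our domain hitting the ingress controller
curl -i -H "Host: api.example.com" http://$INGRESS_IP/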

Creating External DNS

We have our main entry point into the cluster, but how do we route from the domain example.com to the pod my-pod? Typically, we would configure this manually: point the domain's DNS record at the ingress controller's IP, and add an ingress rule from the controller to the pod.

But, as automation is always key, we can automate this with a service named External DNS, which does this for us! It integrates with our DNS provider of choice (in this case Cloudflare) and configures the domains to point to the correct IP address (the ingress controller's).

# Install External DNS (and configure with Cloudflare)
# this will automatically update the DNS records in Cloudflare
helm repo add kubernetes-sigs https://kubernetes-sigs.github.io/external-dns/
helm repo update
helm upgrade --install external-dns kubernetes-sigs/external-dns \
  --namespace $NS_NAME_CERT_DNS \
  --set "provider.name=cloudflare" \
  --set "env[0].name=CF_API_TOKEN" \
  --set "env[0].valueFrom.secretKeyRef.name=cloudflare" \
  --set "env[0].valueFrom.secretKeyRef.key=cf-api-key" \
  --set "env[1].name=CF_API_EMAIL" \
  --set "env[1].valueFrom.secretKeyRef.name=cloudflare" \
  --set "env[1].valueFrom.secretKeyRef.key=cf-api-email" \
  --wait --timeout 600s
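
Once an Ingress with a hostname exists (we create one later), External DNS picks it up and writes the record to Cloudflare. A quick way to confirm, assuming the api.example.com host we configure further below:

# Watch External DNS apply the record changes
kubectl logs -n $NS_NAME_CERT_DNS deploy/external-dns --tail=20

# The domain should resolve to the ingress controller's IP
dig +short api.example.com
echo $INGRESS_IP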

Creating Cert Manager & Issuer

Now that we have the domains resolving, let's provide them with an SSL certificate for secure communication. A trusted authority must sign these certificates, so we use Cert Manager together with Let's Encrypt to do this for us.

It works as follows: Cert Manager watches the domains we configured earlier, and once an Ingress references an issuer, a certificate is requested for it. To issue the certificate, Cert Manager exposes a temporary endpoint (an HTTP-01 ACME challenge) that proves we actually own the domain. Once this validation succeeds, Let's Encrypt returns a signed certificate, which is stored in a Secret.

This ACME challenge is configured through a ClusterIssuer resource, which we create below.

# Install Cert Manager
# this will manage our certificates for the domain and automatically renew them
helm repo add jetstack https://charts.jetstack.io --force-update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace $NS_NAME_CERT_DNS \
  --create-namespace \
  --set crds.enabled=true

# Create Cluster Issuers
# note: there are 2 issuers, a production and a staging one. When switching between them, delete the old certificates (see `kubectl delete certificate -n NS ...` and `kubectl get certificate -A`)
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: $CF_API_EMAIL
    privateKeySecretRef:
      name: letsencrypt-prod-private-key
    solvers:
    - http01:
        ingress:
          class: nginx
EOF

cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-stag
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: $CF_API_EMAIL
    privateKeySecretRef:
      name: letsencrypt-stag-private-key
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
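
If a certificate later hangs in a non-ready state, the intermediate ACME resources usually explain why; Cert Manager ships Order and Challenge CRDs that you can inspect directly:

# Confirm the issuers registered with Let's Encrypt
kubectl get clusterissuer

# Inspect in-flight ACME orders and HTTP-01 challenges
kubectl get orders,challenges -A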

Creating Resources

We are finally ready to deploy our application and get a domain bound to it! This phase consists of two steps:

  1. Creating the actual deployment and service, allowing us to run our application with a port allocated (and an internal ClusterIP address) so we can route to it from within the cluster.
  2. Creating an ingress route, which states which domain should be connected to which service.

Create the Deployment and Service

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  namespace: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: your-repo-url/backend:latest
          ports:
            - containerPort: 8000
          env:
            - name: NODE_ENV
              value: production
            - name: PORT
              value: "8000"
            - name: HOST
              value: "0.0.0.0"
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: example
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
  type: ClusterIP
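
The manifests above use an example namespace and a placeholder image. Assuming you saved them as backend.yaml, create the namespace and apply them as follows:

# Create the application namespace and deploy
kubectl create namespace example
kubectl apply -f backend.yaml
kubectl get pods,svc -n example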

Create the Ingress Route

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress
  namespace: example
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    external-dns.alpha.kubernetes.io/hostname: api.example.com
    external-dns.alpha.kubernetes.io/ttl: "120"
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend-service
            port:
              number: 80
  tls:
  - hosts:
    - api.example.com  # Use your domain here
    secretName: backend-tls-secret  # Cert will populate this secret
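
Assuming the Ingress is saved as ingress.yaml, apply it and check that it picks up the controller's IP in the ADDRESS column:

kubectl apply -f ingress.yaml
kubectl get ingress -n example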

Validate, View Logs and View Certificates

Ok! We are now all set and should be able to connect to our services! This might take a couple of minutes, as DNS records sometimes take a while to propagate. Once done, we should be able to load our URL with a signed certificate!

To validate everything we just did, feel free to check the snippet below that gives you an easy overview of how to view the logs and monitor the issued certificates.

# Validate
kubectl get all -n $NS_NAME_CERT_DNS

# View logs
kubectl logs -f -n $NS_NAME_CERT_DNS deploy/external-dns
kubectl logs -f -n $NS_NAME_CERT_DNS deploy/cert-manager
kubectl logs -f -n $NS_NAME_INGRESS deploy/ingress-nginx-controller

# Monitor issued certificates (and whether they are ready)
# note: certificates that are not yet ready show up with a random suffix
# see the cert-manager logs for more info
kubectl get certs -A

# Trigger manual certification recreation
kubectl delete certificate backend-tls-secret -n example

# Check the SSL certificate served for our domain
openssl s_client -connect api.example.com:443 -servername api.example.com
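
To print just the issuer and validity window (handy for spotting a leftover staging certificate), pipe the handshake output through openssl x509:

openssl s_client -connect api.example.com:443 -servername api.example.com </dev/null 2>/dev/null \
    | openssl x509 -noout -issuer -dates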