Dynamic Sub-Domain Reverse Proxying in Kubernetes with Nginx

5/2/2024
7 min read

In this article, I explain how we can dynamically point subdomains to specific services in a Kubernetes cluster through a single ingress, by setting up a dynamic reverse proxy service using nginx.

TL;DR: name each instance's service after its subdomain, create an nginx router service, capture the subdomain with a regex in the nginx config, and proxy_pass to the service using Kubernetes local addressing.

server {
    listen 80;

    server_name ~^(?<subdomain>.*?)\.;
    resolver kube-dns.kube-system.svc.cluster.local valid=5s;

    location / {
      proxy_pass http://$subdomain.your-namespace.svc.cluster.local;
    }
}

What are we going to do?

Say you have a single-tenant application and want to deploy a copy, or instance, of your web application, database, and other services for each client when they sign up. Each client would get their own installation of your service on a dedicated subdomain of your domain.

Requirement

Using an orchestration platform like Kubernetes, we can spin up instances on demand programmatically through the Kube APIs. For each client, let's create a deployment with the containers the application requires, and then a service uniquely named after the client.
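
As a minimal sketch, here is what the per-client service might look like for a hypothetical client who signs up as airbnb (the client name and labels are illustrative assumptions; your-namespace follows the article's placeholder). The key point is that the service name matches the subdomain:

# Hypothetical per-client service for a client named "airbnb".
# The service name must match the subdomain (airbnb.jobspage.io -> airbnb).
kind: Service
apiVersion: v1
metadata:
  name: airbnb               # named after the client / subdomain
  namespace: your-namespace
spec:
  selector:
    app: airbnb              # assumes the client's deployment carries this label
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80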

[Figure: Kubernetes architecture]

Once Kubernetes has deployed the client's instance and created its service, we need to set up a dynamic reverse proxy so that the subdomain we assign to the client takes the user to the client's instance correctly.

One can, of course, create a separate ingress for each client and expose the client's instance on its subdomain, as sketched below. However, what if you do not want to create a separate ingress for each client?
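
For contrast, here is a sketch of what one such per-client ingress might look like (the client name airbnb and the service port are illustrative assumptions); we would need one of these per client:

# Hypothetical per-client ingress: one of these per client works,
# but it is exactly what we want to avoid creating N times.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: airbnb-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: airbnb.jobspage.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: airbnb
                port:
                  number: 80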

Let us now explore how we can achieve the same through a single ingress and a reverse proxy built with nginx.

Architecture

As neither the ingress configuration nor the default ingress-nginx controller allows the dynamic proxying we require, we shall deploy another nginx router service tasked with this routing. All requests from our ingress will flow into this service and then be routed to the corresponding instance service based on their subdomain.

[Figure: Solution architecture]

Implementation

Here is how we will implement this:

  • Create an nginx deployment using the official nginx image.
  • Use a ConfigMap to mount nginx's default.conf with our custom proxy configuration.
  • Create a service for this nginx deployment.
  • Create an ingress with *.jobspage.io pointing to this nginx service.

nginx.conf

We need to set up our nginx configuration to capture the subdomain and proxy the request to the service of the same name. The core of the config is:

server {
  listen 80;

  server_name ~^(?<subdomain>.*?)\.;
  resolver kube-dns.kube-system.svc.cluster.local valid=5s;

  location / {
    proxy_pass http://$subdomain.your-namespace.svc.cluster.local;
    proxy_set_header Host $host;
  }
}

So what's going on here?

  • server_name - we use a regex pattern that captures the first block of the hostname into a variable, subdomain, which we then use below.
    • The above regex will match any domain; you may make it more specific, like ~^(?<svc>[\w-]+)\.jobspage\.io$, to match only <svc>.jobspage.io (see the sketch after this list).
  • resolver - as we are using a variable inside the proxy_pass path, we need to give nginx a DNS resolver, and for that we use the cluster's Kube DNS service.
    • We would not have had to set the resolver explicitly had we used a static path like airbnb.my-namespace.svc.cluster.local; however, since we are using a variable inside the path, $subdomain.my-namespace.svc.cluster.local, we must set a DNS resolver explicitly.
    • A Kubernetes cluster runs its DNS at kube-dns.kube-system.svc.cluster.local, and this service resolves cluster-local addresses to the actual internal IP of the service inside the cluster.
  • proxy_pass - we take the subdomain captured as a variable by the server_name config above and use Kubernetes local addressing to proxy the request to the corresponding service.
  • proxy_set_header Host - we have to set the Host header explicitly, as otherwise it will be set to subdomain.your-namespace.svc.cluster.local instead of subdomain.your-domain.com.
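
To make the mapping concrete, here is a minimal sketch of the same server block using the stricter pattern (your-namespace remains the article's placeholder namespace):

server {
  listen 80;

  # matches only <svc>.jobspage.io; e.g. for airbnb.jobspage.io, $svc = "airbnb"
  server_name ~^(?<svc>[\w-]+)\.jobspage\.io$;
  resolver kube-dns.kube-system.svc.cluster.local valid=5s;

  location / {
    # airbnb.jobspage.io -> http://airbnb.your-namespace.svc.cluster.local
    proxy_pass http://$svc.your-namespace.svc.cluster.local;
    proxy_set_header Host $host;
  }
}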

You may additionally add a health-check endpoint, a custom page to show instead of the default 502 page when a non-existent subdomain is queried, and support for WebSockets by modifying the server config:

server {
  listen 80;

  server_name ~^(?<subdomain>.*?)\.;
  resolver kube-dns.kube-system.svc.cluster.local valid=5s;

  # a simple health check endpoint
  location /healthz {
    return 200 '$subdomain resolved.';
  }

  # custom 502 page to show in case a non-existent sub-domain is queried
  error_page 502 /502.html;
  location = /502.html {
    root /usr/share/nginx/html/; # change path as needed, and mount the HTML to this path
  }

  location / {
    proxy_pass http://$subdomain.your-namespace.svc.cluster.local;
    proxy_set_header Host $host;

    # to support web sockets (HTTP/1.1 is required for the Upgrade mechanism)
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
  }
}

configmap

To get this custom nginx configuration into the official nginx image running in our container, we can mount it at /etc/nginx/conf.d/default.conf using a ConfigMap and a volume mount.

configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginxconf
data:
  default.conf: |
    server {
      listen 80;

      server_name ~^(?<subdomain>.*?)\.;
      resolver kube-dns.kube-system.svc.cluster.local valid=5s;

      location / {
        proxy_pass http://$subdomain.your-namespace.svc.cluster.local;
        proxy_set_header Host $host;
      }
    }

deployment

We will create a pretty much standard Kubernetes deployment configuration, the one addition being that we load our ConfigMap, nginxconf, through the volumeMounts and volumes config.

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/conf.d/default.conf
              subPath: default.conf
      volumes:
        - name: nginx-config
          configMap:
            name: nginxconf

You may also mount the config at /etc/nginx/nginx.conf; however, in that case you would need to write the full core nginx configuration inside the ConfigMap.
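
For reference, here is a minimal sketch of what that full configuration might look like, assuming the defaults of the official image are otherwise acceptable:

# Minimal sketch of a full /etc/nginx/nginx.conf carrying our server block.
# Worker counts and connection limits are illustrative defaults.
worker_processes auto;

events {
  worker_connections 1024;
}

http {
  server {
    listen 80;

    server_name ~^(?<subdomain>.*?)\.;
    resolver kube-dns.kube-system.svc.cluster.local valid=5s;

    location / {
      proxy_pass http://$subdomain.your-namespace.svc.cluster.local;
      proxy_set_header Host $host;
    }
  }
}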

service

service.yaml

kind: Service
apiVersion: v1
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      name: nginx

ingress

ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-custom
spec:
  ingressClassName: nginx
  rules:
    - host: '*.jobspage.io'
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80

With all this set up, a client's subdomain like airbnb.jobspage.io will now resolve correctly to the corresponding instance service inside the cluster, without setting up a separate ingress per client.


