Kubernetes Load Balancing with MetalLB
17 January, 2022
Prerequisites
- Kubernetes cluster version ≥ 1.13
- A MetalLB-compatible CNI plugin (listed here)
- Helm package manager (Install Helm)
- A domain name
Motivation and objectives
Kubernetes does not provide a network load balancer implementation for bare-metal clusters. When you deploy a LoadBalancer service on an IaaS platform such as AWS or Azure, Kubernetes actually calls out to the platform to create a new network load balancer and allocate an IP-address for it (specifics vary, but this is essentially what happens). MetalLB is on a mission to remedy this by providing a generic network load balancer implementation that works with standard networking equipment.
You can read how MetalLB works from the official documentation.
We will be setting up MetalLB with a single IP-address, which nginx-ingress-controller will use to serve our example applications to the internet. Here is a high-level description of what is going to happen:
- MetalLB is installed, starts scanning for LoadBalancer services in the cluster
- Nginx ingress controller is deployed, MetalLB assigns an IP-address to it
- Two pods are deployed along with their respective services
- Ingress object is deployed, defining routing rules. This is fulfilled by the ingress controller
- MetalLB routes incoming traffic to Nginx ingress controller, which routes it to the backend pods
Install and configure MetalLB
I prefer installing stuff through Helm where possible, because it makes it easier to manage and update deployments later:
# add MetalLB repository
$ helm repo add metallb https://metallb.github.io/metallb
# install MetalLB to its own Kubernetes namespace
$ helm install metallb metallb/metallb -n metallb-system --create-namespace
Configure MetalLB via a ConfigMap object:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - [YOUR-IP-ADDRESS]/32
Let’s dissect the above configuration:
address-pools
- List of address pools MetalLB can use
protocol
- Mode of operation (L2, BGP)
addresses
- IP-address pool. A CIDR block is expected, but we only have a single IP-address, so the network mask is /32.
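If you later hand MetalLB a larger pool, the prefix length determines how many addresses it contains. A quick shell sanity check (plain arithmetic, no cluster needed):

```shell
# The number of addresses in an IPv4 CIDR block is 2^(32 - prefix).
# /32 -> exactly one address, which is all we need here.
prefix=32
echo $(( 1 << (32 - prefix) ))   # prints 1

# A /28 would give MetalLB 16 addresses to hand out.
prefix=28
echo $(( 1 << (32 - prefix) ))   # prints 16
```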
Apply the above configuration:
$ kubectl apply -f metallb-config.yaml -n metallb-system
Now check the MetalLB controller logs to make sure the configuration was accepted:
$ kubectl get pods -n metallb-system
NAME READY STATUS RESTARTS AGE
controller-7cf77c64fb-5dnpw 1/1 Running 0 5d2h
speaker-gw8bw 1/1 Running 0 5d2h
# Check the log for errors
$ kubectl logs controller-7cf77c64fb-5dnpw -n metallb-system
If no errors are present, we can deem the installation a success.
Install Nginx ingress controller
$ helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx \
--create-namespace
Check the MetalLB logs again to see if the IP-address gets assigned. You should see something like this:
{
"caller":"level.go:63",
"event":"ipAllocated",
"ip":"[YOUR-IP-ADDRESS]",
"level":"info",
"msg":"IP address assigned by controller",
"service":"ingress-nginx/ingress-nginx-controller",
"ts":"2022-01-13T14:53:15.747401192Z"
}
Check the Nginx ingress controller service; it should have an external IP-address:
$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.107.73.55 [YOUR-IP-ADDR] 80:32288/TCP,443:31952/TCP 4d4h
ingress-nginx-controller-admission ClusterIP 10.103.254.100 <none> 443/TCP 4d4h
The last piece of the puzzle is the IngressClass object:
If you used the helm command above to install nginx ingress controller, you already have an IngressClass object deployed.
Regardless, you should quickly read this document to understand why it’s needed.
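For reference, the IngressClass that the Helm chart deploys looks roughly like this (a sketch based on the upstream ingress-nginx defaults; your chart version may differ slightly):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  # Must match the controller identifier the nginx ingress
  # controller process is started with, so the controller
  # knows which Ingress objects belong to it.
  controller: k8s.io/ingress-nginx
```

An Ingress object selects this class via spec.ingressClassName, as we will do below.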
Deploy example applications
Let’s test the setup in practice by deploying an application to the cluster.
I wrote a demo configuration, so let’s use that:
$ wget https://gist.githubusercontent.com/Pheebzer/08c4be068b6da7438545b6c26488cda5/raw/animals.yaml
Modify the configs by replacing the hostname part with your own domain. Purchasing domain names and pointing them at IP-addresses is left as an exercise to the reader.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: demo-ns
spec:
  ingressClassName: nginx
  rules:
  - host: k8s.turboblaster.xyz # <--- Your hostname here
    http:
      ...
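The elided http section holds the path-based routing rules. It looks along these lines — the service names and port are placeholders here, so check the downloaded animals.yaml for the real values:

```yaml
    http:
      paths:
      - path: /dog
        pathType: Prefix
        backend:
          service:
            name: dog-service   # placeholder; use the name from animals.yaml
            port:
              number: 5678      # http-echo's default listen port
      - path: /cat
        pathType: Prefix
        backend:
          service:
            name: cat-service   # placeholder
            port:
              number: 5678
```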
Apply the file:
$ kubectl apply -f animals.yaml
It may take a few minutes for the ingress controller to pick up the new ingress we just deployed. You can watch the process with:
$ watch kubectl get ing demo-ingress -n demo-ns
NAME CLASS HOSTS ADDRESS PORTS AGE
demo-ingress nginx [YOURDOMAIN.COM] [IP-ADDRESS] 80 18h
Once the ingress has received its external IP-address, the applications should be ready to use:
// routed to "dogs" application
$ curl -i k8s.turboblaster.xyz/dog
HTTP/1.1 200 OK
Date: Mon, 17 Jan 2022 17:39:15 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 23
Connection: keep-alive
X-App-Name: http-echo
X-App-Version: 0.2.3
"Dogs are the best :)"
---
// routed to "cats" application
$ curl -i k8s.turboblaster.xyz/cat
HTTP/1.1 200 OK
Date: Mon, 17 Jan 2022 17:41:39 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 23
Connection: keep-alive
X-App-Name: http-echo
X-App-Version: 0.2.3
"Cats are the best :)"
In Conclusion
We now have an external network load balancer routing traffic to Nginx ingress controller.
If you want to use multiple ingress controllers, you will need to procure more IP-addresses. Keep in mind that a single Nginx ingress controller can serve multiple ingresses, so it's not necessary to deploy multiple controllers (e.g. one per application) unless you have a specific use case that demands it.
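If you do go down that road, each extra controller needs its own IngressClass so the controllers don't fight over the same Ingress objects. With the ingress-nginx Helm chart this can be driven by values along these lines (a sketch using the chart's ingressClassResource settings; the release name, class name, and IP are placeholders — verify the keys against your chart version):

```yaml
# values-internal.yaml -- hypothetical values for a second
# ingress-nginx release with its own class and IP-address
controller:
  ingressClassResource:
    name: nginx-internal
    controllerValue: "k8s.io/ingress-nginx-internal"
  service:
    # Ask MetalLB for a specific address; with our single-address
    # setup you would first need to add a second pool.
    loadBalancerIP: [YOUR-SECOND-IP]
```

Ingresses meant for the second controller would then set ingressClassName: nginx-internal.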