May 16, 2021

Use kube-vip for your k8s API on k3s

I’m using kube-vip to get high availability for the kubernetes API on my k3s home setup. This is a short description of how I’ve set it up.

Prerequisites

In this example configuration I’m setting up a four-node k3s cluster, using the embedded etcd to get a highly available control plane on three master/server nodes plus a single worker/agent node. I’m also adding an extra IP address to be used for the k8s API; kube-vip will use ARP to announce that IP from one of the master/server nodes.

10.2.0.250 will be the k8s API IP address
10.2.0.251-10.2.0.253 will be used for the three master/server nodes
10.2.0.254 will be used for the worker

I’m using openSUSE Leap 15.2 on all the nodes, and the first master (10.2.0.251) needs to have curl installed. To make things easier you should configure SSH public key authentication on all nodes.

In my environment I don’t allow root to log in through SSH, so we also need to make sure that you can use sudo on all servers to follow this guide.

All commands are executed from my workstation, and to make that process much easier I use k3sup. That tool is awesome and makes it really fast to deploy and tear down k3s clusters. It also copies the kubeconfig to my local workstation and grabs the token from the cluster when adding new nodes. A shortcut well worth using.
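
If you don’t already have k3sup installed, it can be fetched with the project’s installer script (check the k3sup README in case the instructions have changed):

curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/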

I’m using bash shell in this example.

If you haven’t added your SSH key to an ssh-agent, do that first

eval $(ssh-agent)
ssh-add ~/.ssh/id_ed25519

You also need to provide your configuration by setting some environment variables

IP=(10.2.0.250 10.2.0.251 10.2.0.252 10.2.0.253 10.2.0.254)
CONTEXT=demo
SSH_KEY=~/.ssh/id_ed25519
INTERFACE=eth0
K3S_VERSION=v1.20.6+k3s1
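
The rest of this guide picks out subsets of the nodes with bash array slicing (${IP[@]:offset:length}), so a quick sanity check of the layout doesn’t hurt:

echo "API VIP: ${IP[0]}"
echo "masters: ${IP[@]:1:3}"
echo "worker:  ${IP[4]}"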

My sudo setup on my machines requires me to enter my password, but k3sup doesn’t allocate a pseudo-tty and expects a passwordless configuration to be able to install k3s. So let’s start by adding a temporary sudo rule on all nodes (we’ll remove it when the cluster is up and running)

for ip in ${IP[@]:1}; do ssh -t $ip "hostname --long; echo \"$USER ALL=(ALL) NOPASSWD: ALL\" | sudo tee /etc/sudoers.d/k3sup"; done

And just check that it works as expected

for ip in ${IP[@]:1}; do ssh $ip "hostname --long;sudo /bin/true"; done

Make sure that the $INTERFACE variable contains the correct interface name (I assume the interface needs to have the same name on all master/server nodes)

for ip in ${IP[@]:1:3}; do ssh $ip "hostname --long; ip -brief a"; echo; done

Initializing the cluster

We start the installation by initializing the cluster on the first master/server node

k3sup install \
  --cluster \
  --ip ${IP[1]} \
  --context=$CONTEXT \
  --k3s-extra-args='--no-deploy servicelb --no-deploy traefik' \
  --sudo=true \
  --user=$USER \
  --tls-san=${IP[0]} \
  --ssh-key=$SSH_KEY \
  --k3s-version=$K3S_VERSION

When the installation has completed you can point your KUBECONFIG variable at the kubeconfig file that k3sup copied down and switch to the context named “demo”

export KUBECONFIG=~/kubeconfig
kubectl config use-context demo
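
To verify that the kubeconfig works, check the active context and that the first node is Ready:

kubectl config current-context
kubectl get nodes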

Kube-vip installation

When the first node is up and running we can install and configure kube-vip in our 1-node k3s cluster. We want to do that before we join the other nodes, because we’ll be using the kube-vip IP address when we add them.

First we need to create the RBAC configuration that kube-vip uses. We’ll use the k3s built-in auto-deploying manifests feature: just by dropping manifests into /var/lib/rancher/k3s/server/manifests they will be picked up and applied to the cluster.

ssh ${IP[1]} "curl -s https://kube-vip.io/manifests/rbac.yaml | sudo tee /var/lib/rancher/k3s/server/manifests/kube-vip-rbac.yaml"

We also need to create the kube-vip manifest. Following best practices, you should first pull down the kube-vip container image

ssh ${IP[1]} sudo /usr/local/bin/k3s crictl pull docker.io/plndr/kube-vip:v0.3.4

Then we generate the kube-vip manifest and save it in the /var/lib/rancher/k3s/server/manifests directory on our first master/server node.

ssh ${IP[1]} "\
  sudo /usr/local/bin/k3s ctr run --rm --net-host docker.io/plndr/kube-vip:v0.3.4 \
        vip /kube-vip manifest daemonset \
          --arp \
          --interface $INTERFACE \
          --vip ${IP[0]} \
          --controlplane \
          --leaderElection \
          --inCluster \
          --taint \
  | sudo tee /var/lib/rancher/k3s/server/manifests/kube-vip.yaml"

Watch the cluster to see when the kube-vip pod is up and running

watch kubectl get pods -A

Test if you can ping the virtual IP

ping ${IP[0]}

Now you can change the IP address in your ~/kubeconfig file to point to the virtual IP instead of the node IP.
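
One way to do that (assuming GNU sed, and that the kubeconfig currently points at the first master’s IP):

sed -i "s/${IP[1]}/${IP[0]}/" ~/kubeconfig
kubectl cluster-info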

Joining the second and third master/server

Now that the API is available on the virtual IP, we’ll use that address to join the remaining master/server nodes

k3sup join \
  --ip=${IP[2]} \
  --ssh-key=$SSH_KEY \
  --sudo=true \
  --user=$USER \
  --server-host=${IP[0]} \
  --server-user=$USER \
  --server \
  --k3s-extra-args='--no-deploy servicelb --no-deploy traefik' \
  --k3s-version=$K3S_VERSION

and join the last master/server node to the cluster

k3sup join \
  --ip=${IP[3]} \
  --ssh-key=$SSH_KEY \
  --sudo=true \
  --user=$USER \
  --server-host=${IP[0]} \
  --server-user=$USER \
  --server \
  --k3s-extra-args='--no-deploy servicelb --no-deploy traefik' \
  --k3s-version=$K3S_VERSION
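
All three master/server nodes should now be visible. k3s labels its server nodes with node-role.kubernetes.io/master=true (the same label the kube-vip node selector uses later), so you can filter on it:

kubectl get nodes -l node-role.kubernetes.io/master=true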

Join the worker(s)

Adding the worker(s) is done by simply omitting the --server and --k3s-extra-args options from the k3sup command (and specifying the correct IP for the node)

k3sup join \
  --ip=${IP[4]} \
  --ssh-key=$SSH_KEY \
  --sudo=true \
  --user=$USER \
  --server-host=${IP[0]} \
  --server-user=$USER \
  --k3s-version=$K3S_VERSION
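
Once the worker has joined you should see all four nodes, with only the three servers carrying the master role:

kubectl get nodes -o wide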

If you take a look at the daemonsets you’ll notice that kube-vip-ds only has 3 DESIRED although we have four nodes. That’s because we used the --taint option when we created the daemonset manifest, and it applied a node selector that only matches master nodes

kubectl get ds -A
NAMESPACE     NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                         AGE
kube-system   kube-vip-ds   3         3         3       3            3           node-role.kubernetes.io/master=true   10m
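
You can inspect that node selector directly on the daemonset, for example with a jsonpath query:

kubectl -n kube-system get ds kube-vip-ds -o jsonpath='{.spec.template.spec.nodeSelector}'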

Cleaning up

Now that the cluster is up and running, I’ll clean up the temporary passwordless sudo rule I added in the beginning.

for ip in ${IP[@]:1}; do ssh $ip sudo rm /etc/sudoers.d/k3sup; done
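
And verify that sudo prompts for a password again (sudo -k drops any cached credentials, and ssh -t allocates a TTY so the prompt can be shown):

for ip in ${IP[@]:1}; do ssh -t $ip "sudo -k; sudo /bin/true"; done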

