How to install and configure K3s?
What is k3s?
K3s is a lightweight, certified Kubernetes distribution built by Rancher. It's currently a sandbox project at the CNCF. K3s is a production-grade distribution of Kubernetes that is lightweight by design; the foremost reason for building it was the need to run Kubernetes on resource-constrained devices.
K3s is really easy to set up and install, which is the main reason I use it for research, testing, and creating proofs of concept.
In the previous post, I wrote an introductory article about traffic engineering using Traefik v2. The underlying infrastructure hosting Traefik and the other components was K3s.
What I found challenging at the start was the configuration after the initial install. Anyhow, to get straight to the point, I'll first describe the process of installing k3s.
To install K3s on a new VM instance, you can simply run the install script in the terminal:
curl -sfL https://get.k3s.io | sh -
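If you want to verify that the installation went through, the script sets k3s up as a systemd service on most distributions, so a quick sanity check (assuming a systemd-based system) could look like this:

# check that the k3s service is up
sudo systemctl status k3s
# print the installed k3s version
k3s --version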
After the installation and initial setup process (which can take a few minutes), you can access the k3s cluster using the kube config file located at /etc/rancher/k3s/k3s.yaml.
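Right after the install you can also use the kubectl binary bundled with k3s, which reads this file automatically (run with sudo because of the file's ownership, more on that below):

sudo k3s kubectl get nodes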
Configuring installation
Configuration can be passed to the installer via environment variables and/or command-line flags appended after sh -s -:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s - --flannel-backend none --token 12345
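For example, the same options from the command above could be passed purely through environment variables instead of flags (a sketch using the documented INSTALL_K3S_EXEC and K3S_TOKEN variables; 12345 is just the example token from above):

# equivalent install driven by environment variables only
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --flannel-backend none" K3S_TOKEN=12345 sh -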
Configure access to the cluster
First way
To access the cluster, copy k3s.yaml to ~/.kube/config.
$ mkdir -p ~/.kube
$ cp ~/.kube/config ~/.kube/config.bak   # back up any existing config first
$ sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
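Because the file is copied with sudo, it ends up owned by root; a small follow-up (assuming your user should own its own kube config) is to fix ownership and verify access:

$ sudo chown $USER:$USER ~/.kube/config
$ chmod 600 ~/.kube/config
$ kubectl get nodes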
Second way
Alternatively, you can use the kube config file at /etc/rancher/k3s/k3s.yaml directly, without copying it.
You can use this file as your kube config by exporting the KUBECONFIG environment variable.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
ubuntu@k3s:~$ k get pods -n kube-system
NAME                                      READY   STATUS
local-path-provisioner-6c79684f77-zv678   1/1     Running
coredns-d76bd69b-j9b6z                    1/1     Running
metrics-server-7cd5fcb6b7-g5ff6           1/1     Running
helm-install-traefik-crd-w7b52            0/1     Completed
helm-install-traefik-tnt4w                0/1     Completed
svclb-traefik-c7zxl                       2/2     Running
traefik-df4ff85d6-w8wn4                   1/1     Running
Note that the installer creates /etc/rancher under root ownership, so you should set permissions on the kube config accordingly for your user. For the current setup: sudo chown $USER /etc/rancher/k3s/k3s.yaml
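If you don't want to repeat the export in every new shell, one option (a sketch assuming a bash shell) is to persist it in your profile:

# make the KUBECONFIG setting permanent for this user
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.bashrc
source ~/.bashrc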
Architecture
Nodes
➜ ~ kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
ubuntu   Ready    control-plane,master   18m   v1.27.7+k3s1
This setup installs a single-node cluster, meaning the master node also acts as a worker node and runs the pods.
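If you later want to grow beyond a single node, additional worker (agent) nodes can join the cluster using the same install script. A hedged sketch, where <server-ip> is a placeholder for your server's address and the node token is read from the server:

# on the server: read the join token
sudo cat /var/lib/rancher/k3s/server/node-token
# on the new node: install k3s in agent mode and join the existing server
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -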
Kube-system namespace
As can be seen, all the critical pods are located in the kube-system namespace.
Active services in the kube-system namespace are shown below.
➜ ~ kubectl get svc -n kube-system
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kube-dns         ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       16m
metrics-server   ClusterIP      10.43.239.114   <none>        443/TCP                      16m
traefik          LoadBalancer   10.43.84.104    10.8.0.2      80:31664/TCP,443:31701/TCP   14m
K3s handles ingress for you by installing Traefik and exposing it as a LoadBalancer service. This way, ingresses can be added and used immediately on your localhost address.
Hitting http://localhost will hit the Traefik service on the K3s cluster. It's up to the end user to configure the ingresses on the cluster.
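As a minimal illustration of that last point, an Ingress routed through the bundled Traefik could look roughly like the sketch below (the whoami Service name and port are hypothetical; point it at whatever Service you actually run):

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
spec:
  ingressClassName: traefik
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: whoami     # hypothetical Service name
            port:
              number: 80
EOF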
If you are interested in how to configure Traefik, read more in the article shown below.