Expose a Kubernetes service securely to the internet on k3s
Introduction
Kubernetes has revolutionized container orchestration, enabling developers to efficiently manage and scale their applications. Among the various Kubernetes distributions available, k3s has gained popularity due to its lightweight and easy-to-deploy nature.
Installing a single-node k3s cluster is straightforward: the install script provided by k3s is enough. If you only need a simple cluster for local development or local testing, that script alone will do.
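For reference, the standard single-node install from the official k3s documentation is a one-liner that installs and starts a default server:
$ curl -sfL https://get.k3s.io | sh -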
While deploying a k3s cluster within your local environment is straightforward, exposing it securely to the internet requires careful consideration.
This article will guide you through installing a k3s cluster and exposing your cluster workload to the internet via a LoadBalancer service, while keeping the Kubernetes API in a private network.
This pattern is robust from a security standpoint because the Kubernetes API stays in a private network, which narrows the attack path to the cluster API. The drawback is that the API is only accessible from inside that network.
Network
This installation requires a virtual machine with a network interface that is already exposed to the internet.
$ ifconfig
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet ********* netmask 255.255.255.0 broadcast *********
inet6 ********* prefixlen 64 scopeid 0x0<global>
inet6 ********* prefixlen 64 scopeid 0x20<link>
ether ********* txqueuelen 1000 (Ethernet)
RX packets 51536 bytes 44630374 (44.6 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 49946 bytes 45883625 (45.8 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
We are interested in the interface that has the public IP assigned to it. In this case it is ens3, but it can be any interface with a public IP. We will record the inet ********* IPv4 address for later use.
Kubernetes cluster install with k3s
The snippet below downloads k3s for us; the cluster is not running yet.
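The exact command used in the original setup is not shown; one way to install the binary without starting the cluster, using environment variables documented by the k3s install script, is:
$ curl -sfL https://get.k3s.io | INSTALL_K3S_SKIP_ENABLE=true INSTALL_K3S_SKIP_START=true sh -
INSTALL_K3S_SKIP_ENABLE skips enabling the systemd service and INSTALL_K3S_SKIP_START skips starting it, so the k3s binary is installed but nothing is running.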
The Kubernetes cluster will be isolated on a separate interface. Only LoadBalancer services will be exposed to the internet; the Kubernetes API will remain accessible only from within the private network.
The snippet below creates a new network interface of the virtual ethernet (veth) type.
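The original commands are not reproduced here; a sketch that creates a veth pair named vethk3s and assigns it the 10.8.5.1 address used by the cluster (the peer name vethk3s-peer is an assumption) could look like this:
# create a veth pair; vethk3s will carry the cluster traffic
sudo ip link add vethk3s type veth peer name vethk3s-peer
# assign the private address that k3s will bind to
sudo ip addr add 10.8.5.1/24 dev vethk3s
# bring both ends up
sudo ip link set vethk3s up
sudo ip link set vethk3s-peer up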
Next, enable IP forwarding and the firewall so traffic can be filtered; IP forwarding is needed when Linux is acting as a router. Allow SSH (22) and HTTP (80) through the firewall, as sketched below.
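The original commands are not shown; assuming ufw as the firewall frontend, a sketch would be:
# enable IP forwarding, required when Linux routes packets between interfaces
sudo sysctl -w net.ipv4.ip_forward=1
# allow SSH and HTTP before enabling the firewall so the SSH session is not cut off
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
# enable the firewall so it starts filtering traffic
sudo ufw enable
To make IP forwarding survive reboots, set net.ipv4.ip_forward=1 in /etc/sysctl.conf as well.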
Starting the master node:
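The original start command is not reproduced here; presumably it is the same k3s server invocation shown in the restart section below, with the cluster token exported first (the token value is a placeholder you choose):
# pick a secret cluster token; the same value must be reused on every restart
export TOKEN=<your-secret-token>
sudo k3s server \
--kubelet-arg="cloud-provider=external" \
--cluster-init \
--disable traefik \
--token $TOKEN \
--tls-san 10.8.5.1 \
--node-ip 10.8.5.1 \
--advertise-address 10.8.5.1 \
--bind-address 10.8.5.1 \
--flannel-iface=vethk3s \
--kubelet-arg='address=10.8.5.1' \
--write-kubeconfig-mode 644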
After booting the cluster, the master node should report a Ready status, which you can verify as shown below.
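The original output is not reproduced here; you can check the state yourself (node name and timings will differ per environment):
$ kubectl get nodes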
Restarting the master node
If the master node needs to be stopped after the initial start, the cluster can be started again as shown below.
sudo k3s server \
--kubelet-arg="cloud-provider=external" \
--cluster-init \
--disable traefik \
--token $TOKEN \
--tls-san 10.8.5.1 \
--node-ip 10.8.5.1 \
--advertise-address 10.8.5.1 \
--bind-address 10.8.5.1 \
--flannel-iface=vethk3s \
--kubelet-arg='address=10.8.5.1' \
--write-kubeconfig-mode 644
Deploy Nginx to the cluster
Apply the manifest and inspect the final state.
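The manifest itself is not reproduced in this article; a minimal manifest.yaml consistent with the resource names and ports in the output below could look like this (the nginx image tag and the app label are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80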
$ kubectl apply -f manifest.yaml
$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-579b58dcd-pt2c9   1/1     Running   0          6s
$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.43.0.1       <none>        443/TCP        14m
my-service   LoadBalancer   10.43.249.240   10.8.5.1      80:30272/TCP   25s
Expose the LoadBalancer to the internet
The snippet below forwards requests from the public interface to the vethk3s interface. It performs NAT in the following scenario:
- A packet arrives on port 80 of the public interface ens3
- The packet is NATed to 10.8.5.1 on port 30272 (the NodePort of the LoadBalancer service)
- Outgoing packets (from the cluster) are translated back and leave through the public interface ens3
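The original rules are not reproduced; a sketch using iptables that matches the scenario above (the NodePort 30272 is taken from the service output earlier and will differ per deployment) could be:
# DNAT: redirect incoming HTTP on the public interface to the service NodePort
sudo iptables -t nat -A PREROUTING -i ens3 -p tcp --dport 80 -j DNAT --to-destination 10.8.5.1:30272
# MASQUERADE: rewrite the source address so return traffic flows back out via ens3
sudo iptables -t nat -A POSTROUTING -d 10.8.5.1 -p tcp --dport 30272 -j MASQUERADE
# make sure the forwarded traffic is accepted by the filter table
sudo iptables -A FORWARD -p tcp -d 10.8.5.1 --dport 30272 -j ACCEPT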
Testing the outside connectivity
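For example, from a machine outside the network (the address is the VM's public IP, redacted in the ifconfig output above):
$ curl http://<public-ip>/
If everything is wired up correctly, the response is the default Nginx welcome page.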
Nginx responded to our request from inside the Kubernetes cluster. The next step would be to install an ingress controller to handle routing at the application level.
You could use Traefik or the Nginx ingress controller. Bind the LoadBalancer service to the ingress controller, and you can host multiple applications on your Kubernetes cluster, all accessible from the internet.
This was a simple demonstration of how to expose Kubernetes cluster workloads to the internet.