Kubernetes cluster setup: Install and expose k3s to the internet

Introduction

Kubernetes has revolutionized container orchestration, enabling developers to efficiently manage and scale their applications. Among the various Kubernetes distributions available, k3s has gained popularity due to its lightweight and easy-to-deploy nature.

While deploying a k3s cluster within your local environment is straightforward, exposing it securely to the internet requires careful consideration.

This article will guide you through the process of exposing your Kubernetes k3s cluster to the internet while maintaining a robust security posture.

Requirements

Hardware

Hardware requirements should be based on your workload. Minimum recommendations are shown in the table.

Spec    Minimum    Recommended
CPU     1 core     2 cores
RAM     512 MB     1 GB

Network

This installation requires a virtual machine with a network interface that is already exposed to the internet.

$ ifconfig
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet *********  netmask 255.255.255.0  broadcast *********
        inet6 *********  prefixlen 64  scopeid 0x0<global>
        inet6 *********  prefixlen 64  scopeid 0x20<link>
        ether *********  txqueuelen 1000  (Ethernet)
        RX packets 51536  bytes 44630374 (44.6 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 49946  bytes 45883625 (45.8 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

We are interested in the interface that has the public IP assigned to it. In this case it is ens3, but it can be any interface with a public IP.

Record the inet ********* IPv4 address for later use.
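If you prefer to capture the address in a variable, the snippet below shows one way to parse it with awk (the interface name and the 203.0.113.10 sample line are placeholders; in practice you would pipe in the real output of `ip -4 -o addr show dev ens3`):

```shell
# Extract the bare IPv4 address from `ip -4 -o addr` output (field 4 is the CIDR).
extract_ip() { awk '{split($4, a, "/"); print a[1]; exit}'; }

# Real usage would be:  PUBLIC_IP=$(ip -4 -o addr show dev ens3 | extract_ip)
# Here we parse a sample line with a documentation address instead:
sample='2: ens3    inet 203.0.113.10/24 brd 203.0.113.255 scope global ens3'
PUBLIC_IP=$(printf '%s\n' "$sample" | extract_ip)
echo "Recorded public IP: $PUBLIC_IP"
```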

Kubernetes cluster setup with k3s

TOKEN=$(tr -dc A-Za-z0-9 </dev/urandom | head -c 64 ; echo '')
echo "Generated token is: $TOKEN"
echo "Be sure to save it!"
sudo curl -Lo /usr/local/bin/k3s https://github.com/k3s-io/k3s/releases/download/v1.26.5+k3s1/k3s; sudo chmod a+x /usr/local/bin/k3s

Install k3s

The snippet above installs the k3s binary for us. The cluster is not running yet.

Let's prepare the network for our Kubernetes cluster. We will isolate the cluster on a separate interface and expose only the LoadBalancer service to the internet; the Kubernetes cluster IP will be reachable only from inside the cluster.

$ sudo ip link add vethk3s0 type veth peer name vethk3s
$ sudo ip addr add 10.8.5.1/24 dev vethk3s
$ sudo ip link set vethk3s up
$ sudo ip link set vethk3s0 up
Create a virtual ethernet interface

This will create a new pair of network interfaces of the virtual Ethernet (veth) type.

$ sudo ufw enable
$ sudo sysctl -w net.ipv4.ip_forward=1
Enable firewall and enable IP forwarding

We want firewall enabled to filter the traffic. IP forwarding is needed when Linux is acting as a router.  
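Note that enabling ip_forward this way lasts only until the next reboot. To make it persistent, a sysctl drop-in can be used (the file name below is a common convention, not a requirement):

```
# /etc/sysctl.d/99-ip-forward.conf
net.ipv4.ip_forward = 1
```

Load it with `sudo sysctl --system`.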

We need to allow ssh traffic and HTTP traffic through firewall.

$ sudo ufw default reject incoming
$ sudo ufw allow 80/tcp
$ sudo ufw allow 22
Reject all incoming requests - allow only 80 and ssh

Now we are going to run the k3s cluster which will listen on the vethk3s interface for the incoming requests.

$ sudo k3s server \
--kubelet-arg="cloud-provider=external" \
--cluster-init \
--disable traefik \
--token $TOKEN \
--etcd-arg '--client-cert-allowed-hostname 10.8.5.1' \
--etcd-arg '--initial-advertise-peer-urls=https://10.8.5.1:2380' \
--etcd-arg '--listen-peer-urls=https://127.0.0.1:2380,https://10.8.5.1:2380' \
--etcd-arg '--listen-metrics-urls=https://127.0.0.1:2381' \
--etcd-arg '--advertise-client-urls=https://10.8.5.1:2379' \
--etcd-arg '--listen-client-urls=https://127.0.0.1:2379,https://10.8.5.1:2379' \
--tls-san 10.8.5.1 \
--node-ip 10.8.5.1 \
--advertise-address 10.8.5.1 \
--bind-address 10.8.5.1 \
--flannel-iface=vethk3s \
--write-kubeconfig-mode 644
Run single-node k3s master node

The master node is ready, as shown below.

$ kubectl get nodes
NAME      STATUS   ROLES                       AGE     VERSION
qdnqn-0   Ready    control-plane,etcd,master   4m23s   v1.26.5+k3s1
Show current node status

Expose k3s to the internet

$ sudo iptables -A PREROUTING -t nat -i ens3 -p tcp --dport 80 -j DNAT --to-destination 10.8.5.1:30272
$ sudo iptables -A FORWARD -p tcp -d 10.8.5.1 --dport 30272 -j ACCEPT
$ sudo iptables -A POSTROUTING -t nat -s 10.8.5.1 -o ens3 -j MASQUERADE
NAT the requests from the public interface to the vethk3s

The snippet above forwards requests from the public interface to the vethk3s interface, more precisely to the 10.8.5.1 IP, so k3s receives the requests.

Expose Nginx to the internet

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: nginx
  ports:
    - protocol: TCP
      port: 80
      nodePort: 30272
      targetPort: 80
  type: LoadBalancer
Nginx Deployment and the LoadBalancer service that exposes Nginx to the internet
💡
The nodePort is defined to use the port 30272 which is in fact the port we configured in our NAT in the previous snippet.
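Kubernetes allocates NodePorts from the 30000-32767 range by default, so a quick sanity check on the port you plan to NAT to can save some debugging time:

```shell
# Verify a port falls inside the default Kubernetes NodePort range.
port=30272
if [ "$port" -ge 30000 ] && [ "$port" -le 32767 ]; then
  result="ok: $port is a valid NodePort"
else
  result="error: $port is outside 30000-32767"
fi
echo "$result"
```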

Apply the manifest and inspect the final state.

$ kubectl apply -f manifest.yaml
$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-579b58dcd-pt2c9   1/1     Running   0          6s
$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.43.0.1       <none>        443/TCP        14m
my-service   LoadBalancer   10.43.249.240   10.8.5.1      80:30272/TCP   25s

Test the outside connectivity from an external machine.

$ PUBLIC_IP="{YOUR PUBLIC IP HERE}" 
$ curl "http://$PUBLIC_IP/"
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Request from an external machine

Nginx responded to our request from inside the Kubernetes cluster. The next step would be to install an ingress controller to handle routing at the application level.

You could use Traefik or Nginx. Bind the LoadBalancer service to the ingress controller, and you can host multiple applications on your Kubernetes cluster, all accessible from the internet.
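As a sketch of what such an ingress rule could look like for the Service above (the hostname and ingressClassName are assumptions; pick the class matching the controller you install):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: traefik      # assumption: Traefik is the installed controller
  rules:
  - host: app.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service     # the Service defined earlier
            port:
              number: 80
```

Remember that we disabled the bundled Traefik with --disable traefik, so an ingress controller has to be installed separately before a manifest like this does anything.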


This demo is a simple demonstration of how to expose a Kubernetes cluster to the internet. Take the points below into consideration before running a production-ready cluster.

Considerations

Step 1: Evaluate Security Risks

Before exposing your k3s cluster to the internet, it's crucial to assess the associated security risks. Exposing your cluster can make it vulnerable to potential attacks if not properly secured. Consider the following factors:

  1. Authentication and Authorization: Ensure that only authorized users or systems can access your cluster by implementing strong authentication mechanisms such as RBAC (Role-Based Access Control) and user management.
  2. Network Isolation: Separate your cluster from other sensitive systems and data by placing it within a demilitarized zone (DMZ) or using network segmentation techniques like virtual private networks (VPNs).
  3. Encryption: Encrypt the traffic between your cluster and external entities using Transport Layer Security (TLS) certificates and secure communication protocols.
  4. Monitoring and Logging: Implement robust monitoring and logging solutions to detect and respond to any potential security incidents in real-time.
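As an illustration of the first point, a minimal read-only RBAC Role and binding might look like this (the namespace and user name are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                      # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```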

Step 2: Configure Load Balancer and Ingress

To expose your k3s cluster to the internet, you need to set up a load balancer and an ingress controller. These components will handle routing incoming traffic to the appropriate services within your cluster. Follow these steps:

  1. Choose a Load Balancer: Select a load balancer that suits your needs. Popular options include NGINX, HAProxy, and Traefik. Configure it to listen for incoming traffic from the internet.
  2. Set Up an Ingress Controller: Install an ingress controller such as Traefik or NGINX Ingress. This controller will act as the entry point to your cluster, managing routing rules and SSL termination.
  3. Define Ingress Rules: Create ingress rules to define how incoming traffic should be routed to specific services within your cluster. Ensure that the ingress rules are configured securely, allowing only necessary access.

Step 3: Secure External Access

Now that your load balancer and ingress controller are set up, you need to secure external access to your k3s cluster. Implement the following security measures:

  1. TLS Termination: Configure your ingress controller to terminate SSL/TLS connections. Obtain a trusted SSL/TLS certificate from a reputable certificate authority (CA) and enable HTTPS to encrypt traffic between clients and your cluster.
  2. Rate Limiting and WAF: Implement rate limiting and web application firewall (WAF) mechanisms to protect your cluster from potential DDoS attacks and malicious traffic.
  3. IP Whitelisting: Allow access to your cluster only from trusted IP addresses or IP ranges by configuring appropriate firewall rules.
  4. Strong Authentication: Enforce strong authentication mechanisms such as client certificates or OAuth to ensure that only authorized users can access your cluster.
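As one illustration of IP whitelisting at the ingress layer (assuming the NGINX Ingress controller is installed; Traefik offers an equivalent via its middleware resources), an annotation-based sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: restricted-ingress
  annotations:
    # NGINX Ingress-specific annotation; adjust for your controller
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
```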

Step 4: Continuous Security Monitoring

Once your k3s cluster is exposed to the internet, it's essential to continuously monitor its security posture. Implement the following practices:

  1. Log Analysis: Centralize and analyze logs from your cluster components to detect any suspicious activities or security incidents.
  2. Intrusion Detection System (IDS): Deploy an IDS that can monitor network traffic and detect potential intrusion attempts.
  3. Vulnerability Scanning: Regularly scan your cluster for known vulnerabilities using tools like Trivy or Clair. Stay up to date with security patches and promptly address any identified issues.
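To make the scanning recurring, one hedged option is to run Trivy inside the cluster as a CronJob (the image tag, schedule, and scanned image are assumptions for illustration):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: trivy-scan
spec:
  schedule: "0 3 * * *"           # nightly at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: trivy
            image: aquasec/trivy:latest
            args: ["image", "--severity", "HIGH,CRITICAL", "nginx:1.14.2"]
```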
