Installing a single-node k3s cluster is straightforward: the install script provided by k3s is all you need. If you need a simple cluster for local development or local testing, read on.
Install k3s one-liner
To install k3s on a new VM instance, simply run this script in the terminal:
curl -sfL https://get.k3s.io | sh -
After the installation and initial setup process (which can take a few minutes), you can access the k3s cluster using the kubeconfig file located at /etc/rancher/k3s/k3s.yaml.
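A quick way to verify the node is up, run on the VM itself (the kubeconfig is owned by root by default, hence the sudo; k3s also ships a bundled kubectl as `k3s kubectl`):

```shell
# Point kubectl at the k3s kubeconfig explicitly and check the node status
sudo kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get nodes

# Equivalent, using the kubectl bundled with k3s
sudo k3s kubectl get nodes
```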
Expose k3s service
The nodePort is set to 30272, the port our NAT is configured to forward traffic to so that it reaches the Nginx service.
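For reference, a manifest matching the output below could look something like this. The Deployment and Service names come from the kubectl output; the labels, image, and container port are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 8080        # cluster-facing port (matches 8080:30272/TCP below)
    targetPort: 80    # the nginx container port
    nodePort: 30272   # pinned so the NAT port-forward rule stays valid
```

Pinning nodePort avoids Kubernetes assigning a random port from the 30000-32767 range, which would break the static NAT rule.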
Apply the manifest and inspect the final state:
$ kubectl apply -f manifest.yaml
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-579b58dcd-pt2c9 1/1 Running 0 6s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 14m
my-service LoadBalancer 10.43.249.240 10.8.0.2 8080:30272/TCP 25s
After a short wait, the EXTERNAL-IP of my-service converges from <pending> to 10.8.0.2 in this case. The LoadBalancer EXTERNAL-IP can vary from setup to setup.
Testing the outside connectivity
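From a machine outside the network, a request through the NAT should reach Nginx. Assuming the NAT forwards the same port number, the check is a single curl (replace the placeholder with your public address):

```shell
# Request the nginx welcome page through the NAT port-forward
curl -i http://<public-ip>:30272/
```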
Nginx responded to our request from inside the Kubernetes cluster. The next step would be to install an ingress controller to handle routing for multiple applications at the application layer.
You could use Traefik or Nginx. Bind the LoadBalancer service to the ingress controller, and you can host multiple applications on your Kubernetes cluster, all accessible from the Internet.
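As a sketch, once an ingress controller is installed, host-based routing could look like this (the hostnames and backend service names here are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
spec:
  ingressClassName: nginx   # or "traefik", depending on the controller
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1      # hypothetical backend service
            port:
              number: 80
  - host: app2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2      # hypothetical backend service
            port:
              number: 80
```

Both hostnames resolve to the same LoadBalancer IP; the controller routes by the Host header.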
This was a simple demonstration of how to expose Kubernetes cluster workloads outside of the cluster.