k3s
I’ve been working with Kubernetes a lot at work and it’s very interesting. Recently I heard about k3s by the people over at Rancher. It’s billed as a “lightweight Kubernetes”: the binary is less than 40MB and it only needs 512MB of RAM to run. I’ve been wanting to test it out, but my last Raspberry Pi 3 died a few weeks ago. That’s where GCP comes in.
GCP always free tier
Google Cloud Platform offers an always free tier and is one of the few cloud providers that includes a compute instance (VM) for free. Sure, it’s a tiny f1-micro (1 vCPU and 0.6GB of memory), but that should be enough for k3s. Let’s try it out:
Setting up the VM
If you don’t have an account already, go to https://cloud.google.com, create one and sign in. Browse to “Compute Engine” and click “Create Instance”, name your VM something, and choose “micro” (f1-micro) for the machine type. Pick a distro like CentOS, Debian, or Ubuntu (I went with Ubuntu 18.04 minimal) and hit create.
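If you’d rather use the CLI, something along these lines should work with the gcloud SDK installed. Treat it as a sketch: the instance name and zone are placeholders of mine, and the image family assumes the Ubuntu 18.04 minimal image:
$ gcloud compute instances create k3 \
    --zone=us-west1-a \
    --machine-type=f1-micro \
    --image-family=ubuntu-minimal-1804-lts \
    --image-project=ubuntu-os-cloud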
Once you have the external IP of the VM, ssh into it and run curl -sfL https://get.k3s.io | sh -. It’s always best practice to look at a script before piping it into your shell, to make sure you aren’t running something malicious (one way to do that is shown below). Once the script is done running you can run sudo k3s kubectl get node and you’ll see something like this:
NAME STATUS ROLES AGE VERSION
k3 Ready master 31m v1.14.3-k3s.1
Woohoo, you have a very small k8 (or k3) running for free in the “cloud”.
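By the way, since piping a remote script straight into sh is a leap of faith, you can also download the installer first, read through it, and then run it yourself. The filename here is just whatever you want to call it:
$ curl -sfL https://get.k3s.io -o install-k3s.sh
$ less install-k3s.sh
$ sudo sh install-k3s.sh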
Accessing the k3 over the internet
Now that you have a Kubernetes environment, it would be super helpful if you could get to it from the outside. One method is to set up a VPN into the network and make sure the VM has the correct firewall rules. Another is to open the firewall up on port 6443 to the internet and connect that way. At first I did try that method but was met with this error:
unable to connect to the server: x509: certificate is valid for 127.0.0.1, 10.138.0.4 not <redacted external IP>
So it looks like the k3s install wasn’t set up with external access in mind.
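For reference, opening the port on GCP would look something like the rule below (the rule name k3s-api is made up). Even with the rule in place you’d still hit the certificate error above, since the cert doesn’t include the external IP:
$ gcloud compute firewall-rules create k3s-api --direction=INGRESS --allow=tcp:6443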
Set up your kubeconfig
Before we continue we need to set up our local kubeconfig. We can easily do this by running these commands 1:
$ mkdir -p ~/.kube
$ ssh -i ~/.ssh/gcp chrisonline1991@<external_ip> "sudo cat /etc/rancher/k3s/k3s.yaml" > ~/.kube/config
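The server address in the copied file should already be https://127.0.0.1:6443 (the k3s default), which is exactly what we want once the tunnel below is up. You can double-check it:
$ grep server ~/.kube/config
    server: https://127.0.0.1:6443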
SSH tunnel
Then I figured the best way to test this out, with the added bonus of being more secure than going over the open internet, is an ssh tunnel.
The way an ssh tunnel works is simple: you open an ssh connection to a server and have the ssh program listen on a specified local port, then send that traffic through the ssh connection to a port on the other side of the tunnel. Here is the command I used:
$ ssh -i ~/.ssh/gcp cwiggs@<external_IP> -L 6443:localhost:6443
This command is pretty basic: it sshes you into the server with the ~/.ssh/gcp ssh key. The -L switch tells ssh to listen on local port 6443 and forward that traffic through the tunnel to port 6443 on the VM (localhost from the server’s point of view). You might want to change the first 6443 if you are running something like minikube or k3s locally, otherwise this ssh command might fail to bind the port.
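If you don’t need an interactive shell on the VM, you can also run the tunnel in the background. -N skips running a remote command and -f backgrounds ssh after authentication (both are standard OpenSSH flags):
$ ssh -i ~/.ssh/gcp cwiggs@<external_IP> -N -f -L 6443:localhost:6443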
Run kubectl and test it all out
Now that we have the tunnel set up, let’s test it out by running kubectl get all; you should see output like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 14m
Woot, we now have a Kubernetes server we can deploy to with kubectl over an ssh tunnel.
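As a quick smoke test (just a throwaway example, not part of the setup above), you could deploy nginx and watch it come up; keep in mind the f1-micro doesn’t have much headroom:
$ kubectl create deployment nginx --image=nginx
$ kubectl get pods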
1. I used ssh instead of scp because the file requires sudo.