In this tutorial, I will show you how to set up a lightweight Kubernetes cluster using Rancher k3s.
My current Raspberry Pi 4 configuration:
| Hostname | RAM | CPU (cores) | Disk | IP Address |
| --- | --- | --- | --- | --- |
| k3s-master-1 | 8 GB | 4 | 64 GB | 192.168.1.10 |
| k3s-worker-node-1 | 8 GB | 4 | 64 GB | 192.168.1.11 |
| k3s-worker-node-2 | 8 GB | 4 | 64 GB | 192.168.1.12 |
Prerequisites
- Ubuntu OS 20.04 installed on each Pi (see the official installation guide).
- Assign a static IP address to each Pi:
sudo vi /etc/netplan/50-cloud-init.yaml
Set the values as shown below: change `dhcp4` from `true` to `no`, add the `addresses`, `gateway4`, and `nameservers` entries (lines 5-9) below `dhcp4`, change the IP address as per your network setup, and save the file.
network:
    ethernets:
        eth0:
            dhcp4: no
            addresses:
                - 192.168.1.10/24
            gateway4: 192.168.1.1
            nameservers:
                addresses: [192.168.1.1, 8.8.8.8, 1.1.1.1]
            match:
                driver: bcmgenet smsc95xx lan78xx
            optional: true
            set-name: eth0
    version: 2
Apply changes by executing the below command to set static IP:
sudo netplan apply
NOTE: Once you apply the changes, your SSH session will be interrupted. Make sure to reconnect using SSH.
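After reconnecting, you can verify that the static address was applied; this is just a quick check (adjust the interface name if yours differs):

```bash
# eth0 should now show the static address you configured, e.g. 192.168.1.10/24
ip -4 addr show eth0
```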
- Open /boot/firmware/cmdline.txt in edit mode:
sudo vi /boot/firmware/cmdline.txt
Add the value below, and apply this same change on each master and worker node:
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
Mine looks like below.
net.ifnames=0 dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=LABEL=writable rootfstype=ext4 elevator=deadline rootwait fixrtc cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
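If you prefer not to edit the file by hand, the one-liner below is an alternative sketch; it assumes cmdline.txt is a single line that does not already contain the cgroup flags.

```bash
# Append the cgroup flags to the end of the kernel command line if missing
grep -q 'cgroup_memory=1' /boot/firmware/cmdline.txt || \
  sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/firmware/cmdline.txt
```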
- Reboot each node:
sudo init 6
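Once the node is back up, you can confirm that the memory cgroup is enabled; a quick sanity check, not a required step:

```bash
# The last column ("enabled") for the memory controller should be 1
grep memory /proc/cgroups
```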
Getting started with HA setup for Master nodes
Before we proceed with the tutorial, please make sure you have a MySQL database configured. We will use MySQL to set up HA.
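If you still need to create that database, the snippet below is a minimal sketch. The host 192.168.1.50, database name kdb, and the username/password are placeholders that match the connection string used in the next step, so adjust them to your environment.

```bash
# Run from any machine that can reach the MySQL server (requires the mysql client);
# you will be prompted for the MySQL root password.
mysql -h 192.168.1.50 -u root -p -e "
  CREATE DATABASE IF NOT EXISTS kdb;
  CREATE USER IF NOT EXISTS 'username'@'%' IDENTIFIED BY 'password';
  GRANT ALL PRIVILEGES ON kdb.* TO 'username'@'%';
  FLUSH PRIVILEGES;"
```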
On each master node, execute the below command to set up the k3s cluster.
curl -sfL https://get.k3s.io | sh -s - server --datastore-endpoint="mysql://username:password@tcp(192.168.1.50:3306)/kdb"
Once the installation is completed, execute the below command to see the status of the k3s service:
sudo systemctl status k3s
Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2020-10-25 04:38:00 UTC; 32s ago
Docs: https://k3s.io
Process: 1696 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 1720 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 1725 (k3s-server)
Tasks: 13
Memory: 425.1M
CGroup: /system.slice/k3s.service
├─1725 /usr/local/bin/k3s server --datastore-endpoint=mysql://username:password@tcp(192.168.1.50:3306)/kdb
└─1865 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd -->
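Optionally, you can copy the kubeconfig that k3s writes to /etc/rancher/k3s/k3s.yaml so that kubectl works without sudo. This is just a convenience and not required for the rest of the tutorial; the commands below are a sketch.

```bash
# Copy the k3s kubeconfig to the default kubectl location for the current user
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config
```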
Joining worker nodes to k3s cluster
- First, we need to get a token from the master node. Go to your master node and enter the below command to get the token:
sudo cat /var/lib/rancher/k3s/server/node-token
- Second, we are ready to join all the nodes to the cluster by entering the below command and replacing the XXX token with your own:
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=XXX sh -
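On the worker node itself, the install script registers a k3s-agent service; you can optionally confirm it came up before checking from the master:

```bash
sudo systemctl status k3s-agent
```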
- Check the status of your worker nodes from the master node by executing the below command:
sudo kubectl get nodes
ubuntu@k3s-master-1:~$ sudo kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3s-worker-node-2 Ready <none> 28m v1.18.9+k3s1
k3s-master-1 Ready master 9h v1.18.9+k3s1
k3s-worker-node-1 Ready <none> 32m v1.18.9+k3s1
k3s-master-2 Ready master 9h v1.18.9+k3s1
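Worker nodes show <none> under ROLES by default. If you prefer them to be labelled, one option is to add the worker role label yourself; the node names below are from my setup, so substitute your own:

```bash
sudo kubectl label node k3s-worker-node-1 node-role.kubernetes.io/worker=worker
sudo kubectl label node k3s-worker-node-2 node-role.kubernetes.io/worker=worker
```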