Raspberry Pi Kubernetes Cluster

Posted on January 10, 2023 (updated January 24, 2023) by Matt Adam

Table of Contents

  • Summary
  • Components
  • Build
  • Initial Configuration
  • Install K3s
    • CNIs and LB/Ingress
    • K3s on the Master
    • K3s on the Worker
  • Install MetalLB
  • Testing
  • Accessing the Kubernetes Environment
    • Step 1
    • Step 2
  • Deploy an Ingress and test
  • Uninstall
    • Uninstall K3s on the master node
    • Uninstall K3s on the worker nodes

Summary

I have a few Raspberry Pis laying around from a Cardano stake pool I used to run, so I decided to use them to build a simple K3s Kubernetes cluster. I’ll likely move my DNS server here, and that’s about it. In any case, here is what I’m doing for the setup.

Components

  • 3 – Raspberry Pi Model B 8GB
  • 3 – SanDisk 32GB Class 10 MicroSD Card
  • 3 – CanaKit kit including the fan, case, etc. (optional)

Build

Build the Raspberry Pis; there are quite a few docs on this already. Then you need an OS to run. Pretty much anything will work fine, but I’m using the default Raspberry Pi OS (Raspbian). Flash Raspberry Pi OS onto the MicroSD cards using the Raspberry Pi Imager.

Be sure to go into the Imager’s configuration to enable SSH and set a password. Alternatively, you can create an empty file called “ssh” in the boot partition of the MicroSD card.
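If you go the file route, here is a minimal sketch (the mount point is an assumption and will differ by machine):

# Create an empty "ssh" file on the boot partition to enable SSH on first boot
# /Volumes/boot is an assumed mount point; adjust for wherever the card mounts
touch /Volumes/boot/ssh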

Initial Configuration

Plug one of the Pis into the power supply and connect an Ethernet cable.

# SSH Into the node and become root
ssh pi@DHCPIP
sudo su

###
### Optional Network and Vlan configuration ###
# If not using vlans, skip this
apt install vlan -y
nano /etc/network/interfaces.d/vlans

# Add the following (3 is the vlanID, so if you want vlan20, use eth0.20)
auto eth0.3
iface eth0.3 inet manual
  vlan-raw-device eth0

# If wanting to use DHCP, skip this
nano /etc/dhcpcd.conf

# Again either eth0 or eth0.X where X is the vlanID
interface eth0.3
static ip_address=192.168.3.80
static routers=192.168.3.1
static domain_name_servers=192.168.3.6

### Optional Network and Vlan configuration ###
###


# Set the hostname
nano /etc/hostname
node01 # I am using hostname "node01"

nano /etc/hosts

# Edit this line:
127.0.1.1               raspberrypi
# With this:
127.0.1.1               node01

# Install dnsutils and update all the packages
sudo su
apt install dnsutils -y
apt update && apt full-upgrade -y
reboot

# Install Docker and enable the cgroup options that K3s needs
sudo su
apt install -y docker.io
sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1/' /boot/cmdline.txt
reboot

Install K3s

This super-condensed guide is loosely based on another K3s guide. K3s is pretty easy to install: you run a command on one of the Raspberry Pis, and it configures that server as the master. Then you run a “join” command on each worker, and it automatically installs K3s and joins the worker to the master.

CNIs and LB/Ingress

By default K3s comes with Flannel as the CNI, which is fine for my purposes. It also comes with Klipper as the L4 service load balancer and Traefik for L7 ingress.
I’m going to replace Klipper with MetalLB, so that I can get a single external IP address that load balances to my K8s nodes.

K3s on the Master

# Install K3s on the master node
### Check out the K3s documentation for all the flags: https://docs.k3s.io/reference/server-config ###

sudo su
# Default install with no options:
curl -sfL https://get.k3s.io | sh -

# Install and specify a node-ip and cluster-cidr
# --disable servicelb is used to disable Klipper
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=" --cluster-cidr=172.16.0.0/16 --node-ip 192.168.3.80 --disable servicelb" sh -

# Get the join token
cat /var/lib/rancher/k3s/server/node-token

K3s on the Worker

# Install K3s on the worker node
### Check out the K3s documentation for all the flags: https://docs.k3s.io/reference/server-config ###

# Default install (point K3S_URL at the master's IP and use the join token from above):
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.3.80:6443 K3S_TOKEN=K....f0::server:617....2f sh -

# Install and specify a node-ip
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-ip 192.168.3.81" K3S_URL=https://192.168.3.80:6443 K3S_TOKEN=K....f0::server:617....2f sh -
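Back on the master, you can confirm the workers joined. The node02/node03 hostnames below are assumptions based on my naming scheme, and the role label is purely cosmetic:

# On the master: verify the workers have joined and are Ready
kubectl get nodes -o wide

# Optional: give the workers a role label so they don't show <none>
# (node02/node03 are assumed hostnames)
kubectl label node node02 node-role.kubernetes.io/worker=worker
kubectl label node node03 node-role.kubernetes.io/worker=worker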

Install MetalLB

After K3s is installed and the nodes are clustered, you can install MetalLB.

# Install Command, be sure to replace the version with the latest version
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
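Before applying the configuration below, it’s worth waiting for the MetalLB pods to come up. A quick sketch, assuming the stock manifest’s app=metallb label:

# Wait until the MetalLB controller and speaker pods are ready
kubectl wait --namespace metallb-system --for=condition=ready pod --selector=app=metallb --timeout=90s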

Lastly, we need to apply a YAML file to declare the MetalLB configuration. I have an example file below; set the addresses range to whatever IPs you want MetalLB to hand out as load balancer virtual IPs.

---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: mainpool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.3.90-192.168.3.99

---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: dnspool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.3.6/32

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: advertisemetallb
  namespace: metallb-system

Save the above as metallb.yaml, then apply it:

# Apply the MetalLB config
kubectl apply -f metallb.yaml
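To see how a Service ends up on one of these pools, here is a minimal sketch of a LoadBalancer Service pinned to the dnspool above. The my-dns name is just an illustration, and the metallb.universe.tf/address-pool annotation is what I believe MetalLB v0.13 uses for pool selection; check the MetalLB docs for your version.

# Illustrative only: a LoadBalancer Service that asks MetalLB for an IP from dnspool
apiVersion: v1
kind: Service
metadata:
  name: my-dns
  annotations:
    metallb.universe.tf/address-pool: dnspool
spec:
  type: LoadBalancer
  selector:
    app: my-dns
  ports:
    - port: 53
      protocol: UDP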

Testing

K3s is pretty great because it’s easy to install. A few commands, and all the binaries are there, and the nodes automatically form a cluster. You can even use the kubectl commands without any further installation.

kubectl get nodes

K3s also ships its own binary with a built-in kubectl wrapper (k3s kubectl), though I haven’t used it yet.
MetalLB is up and running as well; look for the pods named speaker-x. You will see one speaker pod per node in the k8s cluster.
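A quick way to check, assuming the default metallb-system namespace:

# One controller pod plus one speaker pod per node
kubectl get pods -n metallb-system -o wide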

Accessing the Kubernetes Environment

If you are on the master node, your kubeconfig file should be present, and you should be able to run the above commands. The default kubeconfig location in K3s is /etc/rancher/k3s/k3s.yaml.
If you want to access the K8s environment from a different server, you will need to copy the kubeconfig file.

Step 1

# Run this on the master, and copy the output
sudo cat /etc/rancher/k3s/k3s.yaml

Step 2

# Now on the server in which you want to access the k8s cluster, create the kube folder and create the config file.
mkdir ~/.kube
nano ~/.kube/config

# Paste in the contents from the previous step
# Then modify the server line:
    server: https://127.0.0.1:6443
# Change it to the IP of your master node:
    server: https://192.168.3.80:6443
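As an alternative to copy and paste, here’s a small sketch that pulls the kubeconfig over SSH and rewrites the server address in one step. It assumes the master is reachable at 192.168.3.80 and that the pi user has passwordless sudo (the Raspberry Pi OS default):

# Pull the kubeconfig from the master and point it at the master's IP
mkdir -p ~/.kube
ssh pi@192.168.3.80 "sudo cat /etc/rancher/k3s/k3s.yaml" > ~/.kube/config
sed -i 's/127.0.0.1/192.168.3.80/' ~/.kube/config
chmod 600 ~/.kube/config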

Test the kubectl commands now; you should see them working.

Deploy an Ingress and test

Here’s a quick hello-world nginx Deployment, Service, and Ingress you can test with.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-nginx
spec:
  selector:
    matchLabels:
      app: hello-world
  replicas: 3
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80



---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: hello-world


---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 80
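To test it, apply the manifest and curl the external IP that MetalLB handed to Traefik. The hello-world.yaml file name is just whatever you saved the manifest above as, and this assumes the default K3s Traefik deployment (a LoadBalancer Service named traefik in the kube-system namespace):

# Apply the manifest
kubectl apply -f hello-world.yaml

# Find the EXTERNAL-IP assigned to Traefik by MetalLB
kubectl get svc traefik -n kube-system

# Curl the external IP; you should get the default nginx welcome page
curl http://<EXTERNAL-IP>/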

Uninstall

Uninstall K3s on the master node

# Run this command
/usr/local/bin/k3s-uninstall.sh

Uninstall K3s on the worker nodes

# Run this command
/usr/local/bin/k3s-agent-uninstall.sh
