WebAssembly on Kubernetes fundamentals
WebAssembly on Kubernetes in detail
containerd and runwasi
Bootstrapping Kubernetes workers with Wasm runtimes
For a Kubernetes worker to execute WebAssembly workloads it needs bootstrapping with a Wasm runtime. This is a two-step process:
1. Install the Wasm runtime
2. Register the Wasm runtime with containerd
In most cases, your Kubernetes distro will provide a CLI or UI that automates these steps. However, we’ll explain what’s happening behind the scenes.
Wasm runtimes are binary executables that should be installed on worker nodes in a path that's visible to containerd. They should also be named according to the containerd binary runtime naming convention. The following list shows the wasmtime and spin runtime binaries named appropriately and installed into the /bin directory:
– wasmtime: /bin/containerd-shim-wasmtime-v1
– spin: /bin/containerd-shim-spin-v1
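The naming convention is mechanical: a runtime_type string of the form io.containerd.&lt;runtime&gt;.&lt;version&gt; maps to a binary named containerd-shim-&lt;runtime&gt;-&lt;version&gt;. The following one-liner is purely illustrative of the mapping:

```shell
# containerd derives the shim binary name from the runtime_type string.
# "io.containerd.wasmtime.v1" -> "containerd-shim-wasmtime-v1"
runtime_type="io.containerd.wasmtime.v1"
shim_name=$(echo "$runtime_type" | awk -F. '{printf "containerd-shim-%s-%s", $3, $4}')
echo "$shim_name"
```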
Once installed, runtimes need registering with containerd. This is done by adding them to the containerd config file, which is usually located at /etc/containerd/config.toml.
The following extract shows how to register the wasmtime and spin runtimes in the containerd config.toml file.
[plugins.cri.containerd.runtimes.wasmtime]
  runtime_type = "io.containerd.wasmtime.v1"

[plugins.cri.containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v1"
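Note that on nodes running containerd with version 2 of the config format, plugin section names are fully qualified. The equivalent registration looks like the following (after editing the file, restart containerd, e.g. with sudo systemctl restart containerd, so the change takes effect):

```toml
# containerd config format version 2 (fully-qualified plugin names)
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmtime]
  runtime_type = "io.containerd.wasmtime.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v1"
```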
Using labels to target workloads
Be sure to use meaningful labels and avoid reserved namespaces such as kubernetes.io and k8s.io.
The following command applies the wasmtime-enabled=yes label to a node called wrkr3. RuntimeClasses can use this label to send Wasm workloads to the node.
$ kubectl label nodes wrkr3 wasmtime-enabled=yes
$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
wrkr1 Ready 5d v1.25.1
wrkr2 Ready 5d v1.25.1
wrkr3 Ready 2m v1.25.1 wasmtime-enabled=yes
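You can confirm which nodes a RuntimeClass will match by listing nodes with the same label selector. The output shown here is illustrative:

```shell
$ kubectl get nodes -l wasmtime-enabled=yes
NAME    STATUS   ROLES    AGE   VERSION
wrkr3   Ready    <none>   2m    v1.25.1
```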
RuntimeClasses and Wasm
RuntimeClasses allow Kubernetes to schedule Pods to specific nodes and target specific runtimes.
They have three important properties:
– metadata.name
– scheduling.nodeSelector
– handler
The name is how other objects, such as Pods, reference it. The nodeSelector tells Kubernetes which nodes to schedule work to. The handler tells containerd which runtime to use.
The following RuntimeClass is called wasm1. The scheduling.nodeSelector property sends work to nodes with the wasmtime-enabled=yes label, and the handler property ensures the wasmtime runtime will execute the work.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: "wasm1"
scheduling:
  nodeSelector:
    wasmtime-enabled: "yes"
handler: "wasmtime"
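If you also registered the spin runtime, a second RuntimeClass can target it the same way. The following is a sketch: the spin-enabled label is hypothetical and assumes you've applied it to the relevant nodes.

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: "wasm2"
scheduling:
  nodeSelector:
    spin-enabled: "yes"   # hypothetical label applied to spin-capable nodes
handler: "spin"           # must match the runtime name registered in config.toml
```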
Wasm apps in Kubernetes Pods
The following YAML snippet points the Pod at the wasm1 RuntimeClass. This ensures the Pod gets assigned to a node and runtime specified in the wasm1 RuntimeClass.
In the real world, the Pod template will be embedded inside a higher order object such as a Deployment.
apiVersion: v1
kind: Pod
metadata:
  name: wasm-test
spec:
  runtimeClassName: wasm1    <<<<==== Use this RuntimeClass
  containers:
  - name: ctr-wasm
    image:
    ...
This Pod can be posted to the Kubernetes API server, where the wasm1 RuntimeClass ensures it executes on the correct node with the correct runtime (handler).
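As mentioned above, real-world Pod templates usually live inside a higher-order object such as a Deployment. A sketch of the same Pod template embedded in a Deployment follows; the image reference is a placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wasm-test
  template:
    metadata:
      labels:
        app: wasm-test
    spec:
      runtimeClassName: wasm1    # same RuntimeClass as the Pod example
      containers:
      - name: ctr-wasm
        image: <your-wasm-oci-image>   # placeholder
```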
Notice how the Pod template defines a container even though it's deploying a Wasm app. This is because Kubernetes Pods were designed with containers in mind. For Wasm to work on Kubernetes, Wasm apps have to be packaged inside OCI container images.