
WebAssembly on Kubernetes: everything you need to know
Apr 2, 2023
5 min read
This first article explains the components and technologies that enable WebAssembly on Kubernetes.
The next article is hands-on and walks you through deploying a cluster, configuring containerd for Wasm runtimes, creating a RuntimeClass, and deploying a Wasm app.
Both articles focus on Kubernetes clusters running containerd. The WebAssembly integration will be provided by runwasi.
WebAssembly on Kubernetes fundamentals
Kubernetes needs two things to be able to run WebAssembly workloads:
1. Worker nodes bootstrapped with a WebAssembly runtime
2. RuntimeClass objects mapping to nodes with a WebAssembly runtime
The following high-level diagram shows a Kubernetes cluster with two node pools. Nodes in the Wasm node pool on the right are bootstrapped with a WebAssembly runtime. It also shows a Wasm app wrapped in a Kubernetes Deployment YAML file. The YAML file references a RuntimeClass that maps to the nodes in the “Wasm node pool”.
Let’s dig into the detail.
WebAssembly on Kubernetes in detail
We’ll explain all of the following in more detail:
1. Worker node configuration with containerd and runwasi
2. Bootstrapping Kubernetes workers with Wasm runtimes
3. Using labels to target workloads
4. RuntimeClasses and Wasm
5. Wasm apps in Kubernetes Pods
containerd and runwasi
Most Kubernetes clusters use containerd as the high-level runtime. It runs on every node and manages container lifecycle events such as create, start, stop, and delete. However, containerd only manages these events; it delegates the actual work to a low-level container runtime such as runc. A shim process sits between containerd and the low-level runtime and performs important tasks such as abstracting the low-level runtime from containerd.
The architecture is shown below.
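You can see these shim processes on any containerd node. The listing below is illustrative (abbreviated output from a hypothetical node running regular containers); each running container gets its own shim:
$ ps -ef | grep containerd-shim
root   1021   1  ...  /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4c9e...
root   1088   1  ...  /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 81d2...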
runwasi is a containerd project that lets you swap out container runtimes for WebAssembly runtimes. It operates as a shim layer between containerd and low-level Wasm runtimes, enabling WebAssembly workloads to run seamlessly on Kubernetes clusters. The architecture is shown below.
Everything from containerd and below is opaque to Kubernetes — Kubernetes schedules work tasks to nodes and doesn’t care if it’s a traditional OCI container or a WebAssembly workload.
Bootstrapping Kubernetes workers with Wasm runtimes
For a Kubernetes worker to execute WebAssembly workloads it needs bootstrapping with a Wasm runtime. This is a two-step process:
1. Install the Wasm runtime
2. Register the Wasm runtime with containerd
In most cases, your Kubernetes distro will provide a CLI or UI that automates these steps. However, we’ll explain what’s happening behind the scenes.
Wasm runtimes are binary executables that should be installed on worker nodes in a path that’s visible to containerd. They should also be named according to the containerd binary runtime naming convention.
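In general, a runtime registered as io.containerd.&lt;name&gt;.&lt;version&gt; is served by a binary called containerd-shim-&lt;name&gt;-&lt;version&gt;. For example:
io.containerd.runc.v2      -->  containerd-shim-runc-v2
io.containerd.wasmtime.v1  -->  containerd-shim-wasmtime-v1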
The following list shows the wasmtime and spin runtime binaries named appropriately and installed into the /bin directory:
– wasmtime: /bin/containerd-shim-wasmtime-v1
– spin: /bin/containerd-shim-spin-v1
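If you're installing manually, it's just a case of copying the binary into place. The following sketch assumes you've already built or downloaded the wasmtime shim binary (the runwasi repo builds it from source):
$ sudo install -m 755 containerd-shim-wasmtime-v1 /bin/containerd-shim-wasmtime-v1
$ which containerd-shim-wasmtime-v1
/bin/containerd-shim-wasmtime-v1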
Once installed, runtimes need to be registered with containerd. This is done by adding them to the containerd config file, usually located at /etc/containerd/config.toml.
The following extract shows how to register the wasmtime and spin runtimes in the containerd config.toml file. Recent containerd releases (config version 2) expect the fully qualified plugin name shown here; older version 1 configs abbreviate it to plugins.cri.containerd.runtimes.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmtime]
  runtime_type = "io.containerd.wasmtime.v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v1"
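containerd only reads its config file at startup, so the final registration step is to restart it. On a systemd-based node that looks like the following (adjust for your distro):
$ sudo systemctl restart containerd
$ sudo systemctl is-active containerd
active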
Once the Wasm runtimes are installed and registered, the final node configuration step is to label the nodes.
Using labels to target workloads
If all of your cluster nodes have the same runtimes, you do not need to label them. However, if only subsets of your nodes have Wasm runtimes, you need to label them so that RuntimeClass objects can target them.
The following diagram shows a cluster with six nodes. Two have runc, four have wasmtime, and two have spin (a node can run more than one runtime, which is why the numbers overlap). See how the labels make it obvious which nodes have which runtimes.
Be sure to use meaningful labels and avoid reserved namespaces such as kubernetes.io and k8s.io.
The following command applies the wasmtime-enabled=yes label to a node called wrkr3. RuntimeClasses can use this label to send Wasm workloads to the node.
$ kubectl label nodes wrkr3 wasmtime-enabled=yes
The output of the next command shows the label was correctly applied.
$ kubectl get nodes --show-labels
NAME    STATUS   ROLES    AGE   VERSION   LABELS
wrkr1   Ready    <none>   5d    v1.25.1
wrkr2   Ready    <none>   5d    v1.25.1
wrkr3   Ready    <none>   2m    v1.25.1   wasmtime-enabled=yes
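You can also filter for Wasm-capable nodes directly with a label selector:
$ kubectl get nodes -l wasmtime-enabled=yes
NAME    STATUS   ROLES    AGE   VERSION
wrkr3   Ready    <none>   2m    v1.25.1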
With Wasm runtimes installed, registered with containerd, and labels applied, a node is ready to execute Wasm tasks.
The next step is to create a RuntimeClass that sends Wasm workloads to the node(s).
RuntimeClasses and Wasm
RuntimeClasses allow Kubernetes to schedule Pods to specific nodes and target specific runtimes.
They have three important properties:
– metadata.name
– scheduling.nodeSelector
– handler
The name is how you tell other objects, such as Pods, to use it. The nodeSelector tells Kubernetes which nodes to schedule work to. The handler tells containerd which runtime to use.
The following RuntimeClass is called wasm1. The scheduling.nodeSelector property sends work to nodes with the wasmtime-enabled=yes label, and the handler property ensures the wasmtime runtime will execute the work.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: "wasm1"
scheduling:
  nodeSelector:
    wasmtime-enabled: "yes"
handler: "wasmtime"
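Assuming the manifest is saved as wasm1.yml (a name chosen for this example), you can create and inspect the RuntimeClass as follows:
$ kubectl apply -f wasm1.yml
runtimeclass.node.k8s.io/wasm1 created
$ kubectl get runtimeclass
NAME    HANDLER    AGE
wasm1   wasmtime   5s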
The following diagram shows three worker nodes. One of them is running the wasmtime and spin runtimes and is labelled appropriately. The Pod and RuntimeClass on the left target work against the wasmtime runtime. The Pod and RuntimeClass on the right target work against the spin runtime.
With worker nodes configured and RuntimeClasses in place, the last thing to do is bind Pods to the correct RuntimeClass.
Wasm apps in Kubernetes Pods
The following YAML snippet targets the Pod at the wasm1 RuntimeClass. This will ensure the Pod gets assigned to a node and runtime specified in the wasm1 RuntimeClass.
In the real world, the Pod template will be embedded inside a higher-order object such as a Deployment (a sketch of this appears at the end of the section).
apiVersion: v1
kind: Pod
metadata:
  name: wasm-test
spec:
  runtimeClassName: wasm1    # <<== use this RuntimeClass
  containers:
  - name: ctr-wasm
    image: <OCI image with Wasm module>
    ...
This Pod can be posted to the Kubernetes API server where it will use the wasm1 RuntimeClass to ensure it executes on the correct node with the correct runtime (handler).
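Assuming the manifest is saved as pod.yml (a name chosen for this example), deploying and verifying placement looks like this. The -o wide output (abbreviated) shows which node the Pod landed on:
$ kubectl apply -f pod.yml
pod/wasm-test created
$ kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   NODE
wasm-test   1/1     Running   0          10s   wrkr3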
Notice how the Pod template defines a container even though it’s deploying a Wasm app. This is because Kubernetes Pods were designed with containers in mind. For Wasm to work on Kubernetes, Wasm apps have to be packaged inside of OCI container images.
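As mentioned earlier, real-world Pod templates are embedded in higher-order objects. The following is a minimal sketch of the same Pod spec wrapped in a Deployment; the names, labels, and image placeholder are illustrative:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasm-test           # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: wasm-test
    spec:
      runtimeClassName: wasm1  # same RuntimeClass as the standalone Pod
      containers:
      - name: ctr-wasm
        image: <OCI image with Wasm module>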
Summary
WebAssembly is driving the third wave of cloud computing, and Kubernetes is evolving to take advantage.
runwasi is a containerd project that lets you run WebAssembly on Kubernetes by swapping out container runtimes for WebAssembly runtimes. To make it all work, you bootstrap Wasm runtimes onto Kubernetes worker nodes, create RuntimeClasses that map to those nodes, and reference the RuntimeClasses in Pod templates.
The next article is hands-on and will walk you through the following:
– Deploying a Kubernetes cluster with Wasm runtimes
– Inspecting containerd configuration
– Creating RuntimeClasses
– Deploying a Wasm app
Feel free to connect with me:
– Mastodon