Build a k3d Kubernetes cluster
Run the following command to create a new k3d cluster called wasm-cluster. The image has the Wasm shims pre-installed, and the cluster gets one server node and two agent nodes.
$ k3d cluster create wasm-cluster \
--image ghcr.io/deislabs/containerd-wasm-shims/examples/k3d:v0.5.1 \
-p "8081:80@loadbalancer" --agents 2
$ kubectl get nodes
NAME                        STATUS   ROLES                  AGE   VERSION
k3d-wasm-cluster-server-0   Ready    control-plane,master   2m    v1.24.6+k3s1
k3d-wasm-cluster-agent-0    Ready    <none>                 2m    v1.24.6+k3s1
k3d-wasm-cluster-agent-1    Ready    <none>                 2m    v1.24.6+k3s1
Verify the runtime configuration
Use docker exec to log on to the k3d-wasm-cluster-agent-0 node.
$ docker exec -it k3d-wasm-cluster-agent-0 ash
Run all of the following commands from inside the exec session.
Check the /bin directory for containerd shims named according to the containerd shim naming convention.
# ls /bin | grep containerd-
containerd-shim-runc-v2
containerd-shim-slight-v1
containerd-shim-spin-v1
You can see the runc, slight, and spin shims. runc is the default low-level runtime for running containers on Kubernetes and is present on all worker nodes running containerd. spin and slight are Wasm runtimes for running WebAssembly apps on Kubernetes.
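The naming convention matters because it's how containerd maps a runtime_type from its config to a shim binary on the node. The mapping below is an illustration, not command output:
io.containerd.runc.v2    -->  containerd-shim-runc-v2
io.containerd.spin.v1    -->  containerd-shim-spin-v1
io.containerd.slight.v1  -->  containerd-shim-slight-v1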
With the shims installed correctly, run a ps command to verify containerd is running.
# ps
PID USER COMMAND
58 0 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml...
The output is trimmed, but it shows the containerd process is running. The -c flag passes containerd a custom location for the config.toml file.
List the contents of this config.toml to see the registered runtimes.
# cat /var/lib/rancher/k3s/agent/etc/containerd/config.toml
[plugins.cri.containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins.cri.containerd.runtimes.spin]
runtime_type = "io.containerd.spin.v1"
[plugins.cri.containerd.runtimes.slight]
runtime_type = "io.containerd.slight.v1"
Configure node labels
Run the following command to check existing node labels. If you’re still logged in to one of the cluster nodes, you’ll need to type exit to return to your terminal.
$ kubectl get nodes --show-labels
NAME                        STATUS   ROLES                  VERSION        LABELS
k3d-wasm-cluster-server-0   Ready    control-plane,master   v1.24.6+k3s1   beta.kubernetes.io...
k3d-wasm-cluster-agent-0    Ready    <none>                 v1.24.6+k3s1   beta.kubernetes.io...
k3d-wasm-cluster-agent-1    Ready    <none>                 v1.24.6+k3s1   beta.kubernetes.io...

We’ll add a custom label to a single worker node and use it in a future step to force Wasm apps onto just that node.
Run the following command to add the spin=yes label to the k3d-wasm-cluster-agent-0 worker node.
$ kubectl label nodes k3d-wasm-cluster-agent-0 spin=yes
$ kubectl get nodes --show-labels | grep spin
k3d-wasm-cluster-agent-0   Ready   ...   beta.kubernetes..., spin=yes
At this point, k3d-wasm-cluster-agent-0 is the only node with the spin=yes label. In the next step, you’ll create a RuntimeClass that targets this node.
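As an optional aside, you can list just the labelled nodes with a label selector, and a trailing hyphen removes a label. Don’t remove it now, as the next steps rely on it:
$ kubectl get nodes -l spin=yes
$ kubectl label nodes k3d-wasm-cluster-agent-0 spin-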
Create a RuntimeClass
The following YAML defines a RuntimeClass called spin-test. It selects nodes with the spin=yes label and specifies the spin runtime as the handler. Copy and paste the whole block into your terminal to deploy it.
kubectl apply -f - <<EOF
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: spin-test
handler: spin
scheduling:
  nodeSelector:
    spin: "yes"
EOF
Verify it was created.
$ kubectl get runtimeclass
NAME HANDLER AGE
spin-test spin 10s
At this point, you have a 3-node Kubernetes cluster and all three nodes have the spin runtime installed. You also have a RuntimeClass that can be used to schedule tasks against the k3d-wasm-cluster-agent-0 node. This means you’re ready to run WebAssembly apps on Kubernetes!
In the next step, you’ll deploy a Kubernetes app.
Deploy an app
The following YAML snippet is from the app you’re about to deploy. The only part we’re interested in is the spec.template.spec.runtimeClassName field, which is set to spin-test. This tells Kubernetes to use the spin-test RuntimeClass you created in the previous step. It schedules the app to the correct node and ensures it executes with the appropriate handler (runtime).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-spin
spec:
  replicas: 1
  ...
  template:
    ...
    spec:
      runtimeClassName: spin-test   <<==== Targets the RuntimeClass
      containers:
      ...
Run the following command to deploy the app.
$ kubectl apply \
-f https://raw.githubusercontent.com/nigelpoulton/spin1/main/app.yml
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
wasm-spin 1/1 1 1 14s
$ kubectl get pods -o wide
NAME READY STATUS AGE NODE
wasm-spin-74cff79dcb-npwwj 1/1 Running 86s k3d-wasm-cluster-agent-0
The Pod is running on the k3d-wasm-cluster-agent-0 worker node, which has the label and handler specified in the RuntimeClass.
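If you prefer a scriptable check, a JSONPath query prints just the Pod-to-node mapping. This uses standard kubectl output options and nothing specific to this cluster:
$ kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.nodeName}{"\n"}{end}'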
Test the app is working by pointing your browser to http://localhost:8081/spin/hello or by running the following curl command.
$ curl -v http://127.0.0.1:8081/spin/hello
Hello world from Spin!
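This works because the -p "8081:80@loadbalancer" flag from the cluster create command maps port 8081 on your local machine to port 80 on the k3d load balancer, which forwards the traffic into the cluster.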
Scale the app
Run the following command to scale the app to three replicas.
$ kubectl scale --replicas 3 deploy/wasm-spin
deployment.apps/wasm-spin scaled
$ kubectl get pods -o wide
NAME READY STATUS AGE NODE
wasm-spin-74cff79dcb-npwwj 1/1 Running 3m32s k3d-wasm-cluster-agent-0
wasm-spin-74cff79dcb-vsz7t 1/1 Running 7s k3d-wasm-cluster-agent-0
wasm-spin-74cff79dcb-q4vxr 1/1 Running 7s k3d-wasm-cluster-agent-0
The RuntimeClass is doing its job of ensuring the Wasm workloads run on the correct node.
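The placement is enforced by the scheduling.nodeSelector in the RuntimeClass. You can confirm it’s still set with the following command; the output is trimmed to the relevant block:
$ kubectl get runtimeclass spin-test -o yaml
...
scheduling:
  nodeSelector:
    spin: "yes"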
Next up, you’ll verify they’re executing with the spin runtime by inspecting the containerd processes.
Inspect the containerd processes
Exec onto the k3d-wasm-cluster-agent-0 node.
$ docker exec -it k3d-wasm-cluster-agent-0 ash
# ps | grep spin
PID USER COMMAND
1764 0 {containerd-shim}.../bin/containerd-shim-spin-v1 -namespace k8s.io -id ...
2015 0 {containerd-shim}.../bin/containerd-shim-spin-v1 -namespace k8s.io -id ...
2017 0 {containerd-shim}.../bin/containerd-shim-spin-v1 -namespace k8s.io -id ...
The output is trimmed, but you can see three containerd-shim-spin-v1 processes, one shim process for each of the three replicas.
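If you’d rather count them than eyeball the list, the following works. Filtering out grep itself avoids a false match, a general shell trick rather than anything shim-specific:
# ps | grep containerd-shim-spin | grep -v grep | wc -l
3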
The long hex ID attached to each of the three shim processes is the ID of the associated container task. This is because containerd runs each Wasm task inside its own container.
Run the following command to list containers on the host. Notice how some of the container task IDs match the hex IDs associated with the spin shim processes from the previous command. The PIDs also match the PIDs of the spin shim processes.
# ctr task ls
TASK PID STATUS
3f083847f6818c3f76ff0e9927b3a81f84f4bf1415a32e09f2a37ed2a528aed1 2015 RUNNING
f8166727d7d10220e55aa82d6185a0c7b9b7e66a4db77cc5ca4973f1c8909f85 2017 RUNNING
78c8b0b17213d895f4758288500dc4e1e88d7aa7181fe6b9d69268dffafbd95b 1764 RUNNING
The output is trimmed to only show the Wasm containers.
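If ctr task ls comes back empty on your system, the tasks are probably in the k8s.io containerd namespace (the shim processes you saw earlier were started with -namespace k8s.io). ctr has an -n flag for targeting a namespace:
# ctr -n k8s.io task ls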
You can see more detailed info with the info command.
# ctr containers info 3f083847f6...a37ed2a528aed1
{
    "Runtime": {
        "Name": "io.containerd.spin.v1",
        "annotations": {
            "io.kubernetes.cri.container-name": "spin-hello",
            "io.kubernetes.cri.container-type": "container",
            "io.kubernetes.cri.image-name": "ghcr.io/deislabs/containerd-wasm-shims/examples/spin-rust-hello:v0.5.1",
            "io.kubernetes.cri.sandbox-id": "3f083847f6818c3f76ff0e9927b3a81f84f4bf1415a32e09f2a37ed2a528aed1",
            "io.kubernetes.cri.sandbox-name": "wasm-spin-5bd4bd7b9-kqgjt",
            "io.kubernetes.cri.sandbox-namespace": "default"
        }
    }
    ...
Clean up
Run the following commands to delete the app and then delete the cluster.
$ kubectl delete \
-f https://raw.githubusercontent.com/nigelpoulton/spin1/main/app.yml
$ k3d cluster delete wasm-cluster