WebAssembly on Kubernetes: The ultimate hands-on guide

This is the second article in a two-part series covering everything you need to know about running WebAssembly apps on Kubernetes using containerd and runwasi.
The previous article covered the concepts. This article is hands-on and focuses on Kubernetes clusters running containerd.
 
You’ll need Docker Desktop and K3d to follow along, and the remainder of the article assumes you have these installed.
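If you’re not sure whether they’re installed, both respond to a quick version check:

$ docker version
$ k3d version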
 
You’ll complete the following steps:
 
1. Build a K3d cluster
2. Verify the runtime configuration
3. Configure node labels
4. Create a RuntimeClass
5. Deploy an app
6. Test the app
7. Scale the app
8. Inspect the containerd processes

Build a K3d Kubernetes cluster

Run the following command to create a 3-node K3d Kubernetes cluster. You’ll need Docker Desktop installed and running, but you do not need the Docker Desktop Kubernetes cluster running.

$ k3d cluster create wasm-cluster \
  --image ghcr.io/deislabs/containerd-wasm-shims/examples/k3d:v0.5.1 \
  -p "8081:80@loadbalancer" --agents 2

Verify the cluster is up and running.

$ kubectl get nodes

NAME                        STATUS   ROLES                  AGE   VERSION
k3d-wasm-cluster-server-0   Ready    control-plane,master   2m    v1.24.6+k3s1
k3d-wasm-cluster-agent-0    Ready    <none>                 2m    v1.24.6+k3s1
k3d-wasm-cluster-agent-1    Ready    <none>                 2m    v1.24.6+k3s1

The output shows a 3-node Kubernetes cluster with a single control plane node and two worker nodes.
 
The command that built the cluster used an image with the spin and slight Wasm runtimes pre-installed. These runtimes are what make it possible to run WebAssembly apps on Kubernetes.
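Each “node” in the cluster is actually a Docker container running that image. If you’re curious, you can confirm this with docker ps (the --format template just trims the output to names and images):

$ docker ps --format '{{.Names}}\t{{.Image}}' | grep k3d-wasm-cluster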
 
The next step will verify the runtime configuration.

Verify the runtime configuration

Use docker exec to log on to the k3d-wasm-cluster-agent-0 node.

$ docker exec -it k3d-wasm-cluster-agent-0 ash

Run all of the following commands from inside the exec session.

Check the /bin directory for containerd shims named according to the containerd shim naming convention.

$ ls /bin | grep containerd-

containerd-shim-runc-v2
containerd-shim-slight-v1
containerd-shim-spin-v1

You can see the runc, slight, and spin shims. runc is the default low-level runtime for running containers on Kubernetes and is present on all worker nodes running containerd. spin and slight are Wasm runtimes for running WebAssembly apps on Kubernetes. 
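The binary names follow containerd’s shim naming convention, where a runtime type of io.containerd.<runtime>.<version> maps to a binary named containerd-shim-<runtime>-<version>:

io.containerd.runc.v2    ->  containerd-shim-runc-v2
io.containerd.spin.v1    ->  containerd-shim-spin-v1
io.containerd.slight.v1  ->  containerd-shim-slight-v1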

With the shims installed correctly, run a ps command to verify containerd is running.

$ ps

PID     USER     COMMAND
<Snip>
   58   0        containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml...

The output is trimmed, but it shows the containerd process is running. The -c flag passes containerd a custom location for its config.toml file.

List the contents of this config.toml to see the registered runtimes.

$ cat /var/lib/rancher/k3s/agent/etc/containerd/config.toml

<Snip>
[plugins.cri.containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins.cri.containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v1"

[plugins.cri.containerd.runtimes.slight]
  runtime_type = "io.containerd.slight.v1"

runc, spin, and slight are all installed and registered with containerd.
 
You can repeat these steps for the other two nodes and will get similar results as all three are running containerd and are configured with the spin and slight Wasm runtimes.
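As an aside, if you’re building your own nodes instead of using the pre-built k3d image, registering a shim with containerd looks similar. The following is a minimal sketch for a vanilla containerd 1.6+ install (not needed on this k3d image), assuming the shim binary is already on the node’s PATH:

# Added to /etc/containerd/config.toml on a vanilla containerd install
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v1"

You’d restart containerd after editing the file so it picks up the new runtime.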
 
The next step is to label your nodes.

Configure node labels

Run the following command to check existing node labels. If you’re still logged in to one of the cluster nodes you’ll need to type exit to return to your terminal.

$ kubectl get nodes --show-labels

NAME                        STATUS   ROLES                  VERSION        LABELS
k3d-wasm-cluster-server-0   Ready    control-plane,master   v1.24.6+k3s1   beta.kubernetes.io...
k3d-wasm-cluster-agent-0    Ready    <none>                 v1.24.6+k3s1   beta.kubernetes.io...
k3d-wasm-cluster-agent-1    Ready    <none>                 v1.24.6+k3s1   beta.kubernetes.io...

The nodes have many more labels than shown above, and they can be hard to read. However, if you look closely at your output, you’ll see that none of the nodes have labels indicating they can run Wasm workloads. That’s fine in scenarios like this where every node has the same runtimes installed. However, in environments where only a subset of nodes have a Wasm runtime, you’ll need to label those nodes.
 
The diagram below shows a cluster where the nodes with Wasm runtimes are labeled.
[Diagram: Kubernetes worker nodes with WebAssembly runtimes]

We’ll add a custom label to a single worker node and use it in a future step to force Wasm apps onto just that node.

Run the following command to add the spin=yes label to the k3d-wasm-cluster-agent-0 worker node.

$ kubectl label nodes k3d-wasm-cluster-agent-0 spin=yes

Verify the operation. Your output will be longer, but it should only display the k3d-wasm-cluster-agent-0 node.

$ kubectl get nodes --show-labels | grep spin

NAME                        STATUS   ROLES     ...  LABELS
k3d-wasm-cluster-agent-0    Ready    <none>    ...  beta.kubernetes..., spin=yes

At this point, k3d-wasm-cluster-agent-0 is the only node with the spin=yes label. In the next step, you’ll create a RuntimeClass that targets this node.
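As an aside, if you ever need to undo this, kubectl removes a label by appending a dash to the key:

$ kubectl label nodes k3d-wasm-cluster-agent-0 spin-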

Create a RuntimeClass

The following YAML defines a RuntimeClass called spin-test. It selects nodes with the spin=yes label and specifies the spin runtime as the handler.

Copy and paste the whole block into your terminal to deploy it.

kubectl apply -f - <<EOF
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: spin-test
handler: spin
scheduling:
  nodeSelector:
    spin: "yes"
EOF

The following command verifies the RuntimeClass was created and is available.

$ kubectl get runtimeclass

NAME         HANDLER   AGE
spin-test    spin      10s

At this point, you have a 3-node Kubernetes cluster and all three nodes have the spin runtime installed. You also have a RuntimeClass that can be used to schedule tasks against the k3d-wasm-cluster-agent-0 node. This means you’re ready to run WebAssembly apps on Kubernetes!
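If you want to see the full object, including the scheduling section that steers Pods to labeled nodes, you can dump it as YAML:

$ kubectl get runtimeclass spin-test -o yaml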

In the next step, you’ll deploy a Kubernetes app.

Deploy an app

The following YAML snippet is from the app you’re about to deploy. The only part we’re interested in is the spec.template.spec.runtimeClassName field, which is set to spin-test. This tells Kubernetes to use the spin-test RuntimeClass you created in the previous step, scheduling the app to the correct node and ensuring it executes with the appropriate handler (runtime).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-spin
spec:
  replicas: 1
  ...
  template:
    ...
    spec:
      runtimeClassName: spin-test     <<==== Targets the RuntimeClass
      containers:

Deploy it with the following command.

$ kubectl apply \
  -f https://raw.githubusercontent.com/nigelpoulton/spin1/main/app.yml

Verify the app was deployed. It might take a few seconds to enter the ready state, and it will only work if you followed all the previous steps.

$ kubectl get deploy

NAME          READY   UP-TO-DATE   AVAILABLE   AGE
wasm-spin     1/1     1            1           14s

Verify it’s running on the correct node.

$ kubectl get pods -o wide

NAME                         READY   STATUS    AGE   NODE
wasm-spin-74cff79dcb-npwwj   1/1     Running   86s   k3d-wasm-cluster-agent-0

It’s running on the k3d-wasm-cluster-agent-0 worker node, which carries the label and has the handler specified in the RuntimeClass.
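You can also confirm the Pod picked up the RuntimeClass with a jsonpath query (substitute your own Pod name):

$ kubectl get pod wasm-spin-74cff79dcb-npwwj -o jsonpath='{.spec.runtimeClassName}'

spin-test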

Test the app is working by pointing your browser to http://localhost:8081/spin/hello or by running the following curl command.

$ curl -v http://127.0.0.1:8081/spin/hello

<Snip>
Hello world from Spin!

Congratulations, the application is successfully deployed to the worker node specified in the RuntimeClass.
 
In the next step, you’ll scale the app to prove that all replicas get scheduled to the same node.

Scale the app

Increase the number of replicas from 1 to 3.

$ kubectl scale --replicas 3 deploy/wasm-spin

deployment.apps/wasm-spin scaled

Verify all three Pods are running on the k3d-wasm-cluster-agent-0 node.

$ kubectl get pods -o wide

NAME                         READY   STATUS     AGE     NODE
wasm-spin-74cff79dcb-npwwj   1/1     Running    3m32s   k3d-wasm-cluster-agent-0
wasm-spin-74cff79dcb-vsz7t   1/1     Running    7s      k3d-wasm-cluster-agent-0
wasm-spin-74cff79dcb-q4vxr   1/1     Running    7s      k3d-wasm-cluster-agent-0

The RuntimeClass is doing its job of ensuring the Wasm workloads run on the correct node.

Next up, you’ll verify they’re executing with the spin runtime by inspecting the containerd processes.

Inspect the containerd processes

Exec onto the k3d-wasm-cluster-agent-0 node.

$ docker exec -it k3d-wasm-cluster-agent-0 ash

Run the following commands from inside the exec session.
 
List running spin processes.

$ ps | grep spin

PID    USER  COMMAND
<Snip>
1764   0    {containerd-shim}.../bin/containerd-shim-spin-v1 -namespace k8s.io -id ...
2015   0    {containerd-shim}.../bin/containerd-shim-spin-v1 -namespace k8s.io -id ...
2017   0    {containerd-shim}.../bin/containerd-shim-spin-v1 -namespace k8s.io -id ...

The output is trimmed, but you can see three containerd-shim-spin-v1 processes. This is one shim process for each of the three replicas.

The long hex ID attached to each of the three shim processes is the ID of the associated container task. This is because containerd runs each Wasm task inside its own container.

Run the following command to list the container tasks on the host. Notice how the task IDs match the hex IDs attached to the spin shim processes in the previous command, and the PIDs match the shim PIDs.

$ ctr task ls

TASK                                                                PID     STATUS
3f083847f6818c3f76ff0e9927b3a81f84f4bf1415a32e09f2a37ed2a528aed1    2015    RUNNING
f8166727d7d10220e55aa82d6185a0c7b9b7e66a4db77cc5ca4973f1c8909f85    2017    RUNNING
78c8b0b17213d895f4758288500dc4e1e88d7aa7181fe6b9d69268dffafbd95b    1764    RUNNING
<Snip>

The output is trimmed to only show the Wasm containers.

You can see more detailed info with the info command.

$ ctr containers info 3f083847f6...a37ed2a528aed1

{
    <Snip>
    "Runtime": {
        "Name": "io.containerd.spin.v1",
        <Snip>
        "annotations": {
            "io.kubernetes.cri.container-name": "spin-hello",
            "io.kubernetes.cri.container-type": "container",
            "io.kubernetes.cri.image-name": "ghcr.io/deislabs/containerd-wasm-shims/examples/spin-rust-hello:v0.5.1",
            "io.kubernetes.cri.sandbox-id": "3f083847f6818c3f76ff0e9927b3a81f84f4bf1415a32e09f2a37ed2a528aed1",
            "io.kubernetes.cri.sandbox-name": "wasm-spin-5bd4bd7b9-kqgjt",
            "io.kubernetes.cri.sandbox-namespace": "default"
            <Snip>

If you examine the full output of the previous command, you’ll see typical container constructs such as namespaces and cgroups. The WebAssembly app executes inside the normal WebAssembly sandbox, which, in turn, runs inside a minimal container.
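If you want one more data point, you can check the cgroup of one of the shim processes from inside the exec session. A quick sketch, using a PID from the earlier ps output (your PIDs and the exact paths will differ):

$ cat /proc/2015/cgroup

You should see Kubernetes pod cgroup paths (kubepods), showing the Wasm task is constrained like any other container.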

Clean up

The following command will delete the app.

$ kubectl delete \
  -f https://raw.githubusercontent.com/nigelpoulton/spin1/main/app.yml

This command will delete the K3d Kubernetes cluster.

$ k3d cluster delete wasm-cluster

Summary

The cloud-native ecosystem is working hard to adapt to WebAssembly. It’s now possible to bootstrap Kubernetes worker nodes with Wasm runtimes and use RuntimeClasses to run WebAssembly apps on Kubernetes.
 
Platforms such as Docker Desktop, K3d, and Azure Kubernetes Service (AKS) already support Wasm apps on Kubernetes and others will follow quickly.
 
Other ways to run WebAssembly on Kubernetes exist. One example is Red Hat OpenShift integrating Wasm support into crun.
 