WebAssembly (Wasm) is a superstar web technology that runs in all the major browsers and has made the web faster, more secure, and more portable. Brilliant!
- A quick history of WebAssembly
- Why the cloud needs WebAssembly
- WebAssembly apps are smaller than containers
- WebAssembly apps are faster than containers
- WebAssembly apps are more secure than containers
- WebAssembly apps are more portable than containers
- Cloud native WebAssembly
- WebAssembly and Kubernetes
A quick history of WebAssembly
WebAssembly first showed up around 2017, and back then it was all about the web and web apps — making them faster, more secure, and more portable. That shouldn’t be a surprise considering it was developed by web giants like Apple, Google, Microsoft, and Mozilla.
Why the cloud needs WebAssembly
Like the web, the cloud is also on a never-ending journey towards smaller, faster, more secure, and more portable apps.
Well… WebAssembly apps are smaller than containers, they’re faster than containers, they’re more secure than containers, and they’re more portable than containers. This makes them a perfect fit for the next wave of cloud computing.
In fact, Solomon Hykes, founder of Docker, famously tweeted that they wouldn’t have needed to invent Docker if WebAssembly existed in 2008. And yes, I know it’s painful how often this tweet is quoted in relation to Wasm.
To be fair, he also posted a follow-up tweet suggesting WebAssembly will work alongside containers and not replace them. I’m not sure I agree with him, but I’ll save those thoughts for a future post.
WebAssembly apps are smaller than containers
An example of how small WebAssembly apps are can be seen in two similar artifacts in my own Docker Hub repos.
Both are simple apps that output text, and both are built with standard tools. The containerised app was built from a small base image, whereas the Wasm app is a Rust app compiled with the `--release` flag. No other steps were taken to keep them small.
As can be seen from the image below, the Wasm app is nearly 10x smaller than the containerised app.
WebAssembly apps are faster than containers
As a general rule, containers start faster and execute faster than virtual machines. The same is true for WebAssembly apps: they start faster and execute faster than containers.
It’s approximately true to say that VMs take minutes to start, containers take seconds to start, and Wasm apps can take milliseconds to start.
Some of the data on pages 3 and 4 of this short report show that Wasm apps can start anywhere between 10x and 500x faster than containers, and that execution times can be 10x faster.
Aside from the report, it’s widely accepted that cold-start times for Wasm apps are game-changing and enable true scale-to-zero architectures — Wasm cold start times are so fast that you don’t need to maintain a pool of pre-warmed containers ready to service requests.
WebAssembly apps are more secure than containers
Before going any further, I want to acknowledge the incredible work done by the community in securing containers and container orchestration platforms. It’s easier than ever to run highly secure containerised workloads.
However, the architecture of containers creates a far more open and far less secure starting point than WebAssembly.
At a high-level, containers start out with a wide-open allow-by-default model of broad access to the host kernel. Locking those doors and plugging those holes requires a ton of effort. In other words, containers aren’t very secure and they trust the apps they run.
WebAssembly apps execute in a deny-by-default sandbox outside of the kernel where all access to capabilities has to be explicitly allowed. In other words, the WebAssembly sandbox is secure and distrusts the apps it runs.
Remember, the WebAssembly sandbox has been battle-tested over many years of running untrusted code from the web.
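As a minimal sketch of the deny-by-default model, the hypothetical Rust snippet below tries to read a file. Compiled natively it just works, but compiled to WASI it fails at runtime unless the host explicitly grants the capability, for example by preopening the directory with wasmtime’s `--dir` flag. The file name is made up for illustration.

```rust
use std::fs;

// Attempt to read a file. Under WASI this only succeeds if the host
// preopened the containing directory, e.g. `wasmtime --dir=. app.wasm`.
// With no `--dir` grant, the sandbox denies the access.
fn try_read(path: &str) -> Result<String, std::io::Error> {
    fs::read_to_string(path)
}

fn main() {
    match try_read("data.txt") {
        Ok(text) => println!("capability granted, read {} bytes", text.len()),
        Err(e) => println!("no capability or no file: {e}"),
    }
}
```

The point of the sketch is the direction of trust: the app has to be granted access from the outside, rather than starting with broad access that has to be taken away.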
WebAssembly apps are more portable than containers
The architecture of containers means they’re not very portable. For example, a container built for `linux/amd64` won’t work on `linux/arm64`, and it obviously won’t work on a non-Linux OS.
This results in image sprawl where organisations have to build and maintain multiple images for the same app — one for each of the different OS and CPU architectures in their environments.
WebAssembly solves this issue by creating a single Wasm module that runs everywhere. It does this by implementing its own bytecode format that requires a runtime to execute. You build your app once as `wasm32/wasi`, and then a Wasm runtime on any host can execute it.
As a quick example, you can build a Wasm app on your laptop and then use the wasmtime runtime to execute it on any combination of Linux, macOS, and Windows, on AMD64 or ARM64. Other Wasm runtimes exist that will run it on even more exotic architectures such as those found on IoT and edge devices.
The net result is that WebAssembly delivers on the promise of build once, run anywhere.
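The build-once idea can be sketched with a tiny Rust program: the source never changes, only the compile target does. The target name used in the comments (`wasm32-wasi`, renamed `wasm32-wasip1` in newer toolchains) is a toolchain detail, so treat the commands as illustrative rather than exact.

```rust
// The same source builds for every platform; only the target flag changes:
//   cargo build --release                       (native host build)
//   cargo build --release --target wasm32-wasi  (Wasm build)
fn target_description() -> String {
    // OS and ARCH are baked in at compile time, so this reports whatever
    // target the binary was built for.
    format!("{}/{}", std::env::consts::OS, std::env::consts::ARCH)
}

fn main() {
    // Natively this prints something like "linux/x86_64"; built for WASI
    // and run under a Wasm runtime, it reports the wasm target instead.
    println!("compiled for: {}", target_description());
}
```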
Cloud native WebAssembly
“Cloud native WebAssembly” is using WebAssembly in the cloud for cloud apps and cloud use-cases. You’ll sometimes hear it called cloud-side WebAssembly, server-side WebAssembly, or WebAssembly on the server.
Thanks to startups and entrepreneurs we already have the first wave of cloud-side Wasm runtimes and tools. We just need to write the apps and use the tools to run them.
You take a regular cloud app requirement and code it in your favorite language, such as C, C++, Rust, or Go.
The first difference comes at compile time. Instead of compiling to an OS and architecture such as `linux/arm64`, you compile to WebAssembly (`wasm32/wasi`). This is because Wasm is its own bytecode format.
However, Wasm bytecode needs a runtime to execute it. That’s fine, lots of Wasm runtimes exist and you’ll be able to find one for just about any requirement. A couple of popular cloud examples include:
- wasmtime. This is a Bytecode Alliance project and is designed to run on servers and the cloud.
- WasmEdge. This is a CNCF project and has a bit more of a focus on edge devices.
Lots of other Wasm runtimes exist, but the point is, the runtime translates the Wasm bytecode into native machine code for your cloud server.
The following diagram shows a single Wasm binary executing on a wide variety of cloud and edge platforms.
If you like to learn by doing, this article walks you through writing a hello world app in Rust, compiling it to WebAssembly, hosting it on Docker Hub, and running it with Docker and the WasmEdge runtime.
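As a hedged sketch of that flow, here is the kind of hello-world app involved, with the build-and-run steps as comments. The file name and exact commands are assumptions for illustration (and the target name varies by toolchain version), so check the current docs for your tools.

```rust
// Illustrative workflow (names and flags are assumptions, not exact):
//   rustup target add wasm32-wasi
//   cargo build --release --target wasm32-wasi
//   wasmtime target/wasm32-wasi/release/hello.wasm
//
// The source itself is just a normal Rust program.
fn greeting() -> String {
    String::from("Hello from cloud native Wasm!")
}

fn main() {
    println!("{}", greeting());
}
```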
WebAssembly and Kubernetes
As containerd is the most popular runtime used by Kubernetes, work such as the runwasi containerd shim is opening the door for Kubernetes to schedule and manage Wasm apps with minimal effort.
Other ways of bringing Wasm to Kubernetes have been tried (Krustlet). However, runwasi and the shim approach seem to be the most likely way forward, and we should expect to see Wasm apps on Kubernetes very soon.
WebAssembly (Wasm) is a battle-hardened technology that has made the web faster, safer, and extremely portable.
“Cloud native WebAssembly” is using Wasm on servers and in the cloud, and using orchestration tools such as Kubernetes to deploy and manage Wasm apps.
The main forces driving Wasm into the cloud native ecosystem are that Wasm apps are a lot smaller, a lot faster, more secure, and more portable than containers!
You write apps as normal, compile them as Wasm binaries, and execute them on any architecture and OS with a Wasm runtime.
It’s early days with cloud native Wasm. However, the future is bright, and now is a great time to get involved.
Other WebAssembly/Wasm stuff
If you liked this article, check out some of my other WebAssembly articles.
You can also subscribe to my Word on the cloud newsletter. It’s short and keeps you up-to-date with the best stuff going on around cloud native.