Last week I was speaking at a conference in Detroit, USA. As part of my visit, I participated in a panel on service meshes.
As most of you weren’t there, here are the take-home points…
What is a service mesh
At the highest level, a service mesh is a form of intelligent application network.
When using the term application network, I’m talking about connectivity and intelligence that operates higher up the stack than things like TCP and UDP. A service mesh brings the following features to augment and improve your applications:
- Secure communications between app services (authentication and encryption via mTLS)
- Greater visibility into application traffic (aggregate telemetry to a central source)
- Intelligence (circuit breakers, backpressure, canaries, A/B releases, intelligent routing…)
- Lots more
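To make one of those features concrete, here's a minimal Python sketch of the circuit-breaker idea: after repeated failures, stop calling the failing service and fail fast until a cooldown elapses. The class name and thresholds are illustrative, and a real mesh implements this in the proxy layer rather than in your application code.

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch (illustrative, not a mesh's real code).

    Closed: calls pass through. After enough consecutive failures the
    circuit opens and calls fail fast until a cooldown elapses.
    """

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (healthy)

    def call(self, fn, *args, **kwargs):
        # If the circuit is open, fail fast until the cooldown has passed.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # Cooldown over: allow one trial call ("half-open" state).
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success resets the failure count
        return result
```

The point is that none of this logic needs to live in App-A or App-B; the sidecar proxy does it on their behalf.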
What does a service mesh look like
A common approach is to add a proxy container into every Pod. We call this a sidecar container, and it transparently intercepts all traffic ingressing and egressing the Pod. This means it can do amazing things with network traffic.
Here’s a quick example…
App-A in Pod-A talks to App-B in Pod-B. However, unbeknown to App-A and App-B there is a proxy sidecar container in each Pod that is intercepting all traffic and doing things like:
- encrypting traffic
- sending detailed telemetry to a central dashboard
- manipulating and optimising traffic flow
As far as App-A and App-B are concerned, they're communicating directly with each other, blissfully unaware of the service mesh (proxy sidecars).
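To picture the sidecar pattern, here's a simplified sketch of what a Pod spec might look like after the proxy is injected. The container and image names are illustrative; in practice, meshes like Istio inject the sidecar automatically (for example via a namespace label) rather than you writing it by hand.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  containers:
  - name: app-a              # your application container, unchanged
    image: example/app-a:1.0
  - name: proxy-sidecar      # injected proxy: intercepts all Pod traffic
    image: example/proxy:1.0
```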
Why do we need a service mesh
A service mesh is the best place to implement all of the features and intelligence previously mentioned.
For example, nobody wants to start coding authentication and encryption (along with certificate management) into every application service. Also, nobody wants to code every application service to expose telemetry, or to implement circuit-breaker logic.
The right approach is to embed these types of intelligence in a service mesh, and deploy all applications to the service mesh. This way, application services are kept clean and simple, and they inherit intelligence from the service mesh for free!
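For example, with Istio you can declare circuit-breaking policy as configuration instead of application code. This is a rough sketch (the `app-b` host name is made up for illustration), but the fields follow Istio's `DestinationRule` API:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-b-circuit-breaker
spec:
  host: app-b                  # the service this policy applies to (illustrative)
  trafficPolicy:
    outlierDetection:          # Istio's circuit-breaking mechanism
      consecutive5xxErrors: 5  # trip after 5 consecutive server errors
      interval: 10s
      baseEjectionTime: 30s
```

App-B's code never changes; the mesh enforces the policy in the sidecar proxies.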
Service meshes are hard but getting easier
It’s relatively hard to deploy a service mesh to a Kubernetes cluster you built yourself. However, if you’ve got the engineering talent, it’s definitely worth the effort!
By contrast, it’s incredibly simple to install a service mesh on some of the hosted Kubernetes platforms. For example, you can install the Istio service mesh as part of a new GKE cluster with a couple of simple mouse clicks.
Istio vs Linkerd
Linkerd (pronounced “linker-dee”) is probably the original service mesh. It’s an official CNCF project and somewhat simpler than Istio with a smaller scope.
Istio came from Google and has recently seen a lot of uptake from the community. Istio is trying to do more than Linkerd.
Both are great technologies, but it feels like Istio is gaining the upper hand (I reserve the right to be wrong about this).
Service meshes absolutely need to be part of every production Kubernetes cluster.
We’re running line-of-business production applications on Kubernetes, and a lot of us aren’t deploying a mesh. This creates several unwanted scenarios, including:
- Most application traffic will be unencrypted (a big problem for microservices apps)
- You lack any decent insight into application traffic and saturation…
Imagine telling your CIO that critical business applications are communicating over insecure connections and you don’t have a handle on what traffic patterns look like! It’d be a very uncomfortable conversation.
On the downside, service meshes can be hard to implement and there is a small performance overhead. But these are both massively outweighed by the advantages.
Final word… in 2-3 years we’ll look back and be embarrassed that we deployed important business applications without a service mesh!
And a quick video talking about the same stuff… https://www.youtube.com/embed/_YrlQyhltUk
I’m going to be at KubeCon in San Diego. Give me a shout if you’re gonna be there and want to connect. And feel free to sign up for my Kubernetes 101 workshop that is taking place on Monday 18th November at the KubeCon conference center. See you there!