What is Istio? The Kubernetes service mesh explained

Microservices architectures solve some problems but introduce others. Dividing applications into independent services simplifies development, updates, and scaling. But it also gives you many more moving parts to connect and secure. Managing all the network services—load balancing, traffic management, authentication and authorization, and so on—can become stupendously complex.

The term for this networked space between the services in your Kubernetes cluster is service mesh. Istio, an open source project originally developed by Google, IBM, and Lyft and now part of the Cloud Native Computing Foundation, provides a way to manage your cluster’s service mesh before it turns into a bramble snarl.

What is a service mesh?

Certain common behaviors tend to spring up around any group of networked applications: the need to load balance between service instances, to A/B test different combinations of services, or to set up end-to-end authentication across chains of services. These behaviors, and how they’re enacted, are collectively known as a service mesh.

Managing the service mesh shouldn’t be left to the services themselves. No single service is in a good position to do something so top-down, and it really shouldn’t be the service’s job anyway. Better to have a system that sits between the services and the network. This system supplies two key functions: management and abstraction.

  1. Management keeps the services themselves from having to deal with the nitty-gritty of managing network traffic—things like load balancing, routing, retries, and so on.
  2. Abstraction makes it easy for admins to enact high-level decisions about network traffic in the cluster: policy controls, metrics and logging, service discovery, secure inter-service communication via TLS, and so on.

Istio service mesh components

Istio works as a service mesh by providing two basic pieces of architecture for your cluster: a data plane and a control plane.

The data plane handles network traffic between the services in the mesh, by way of a group of network proxies. Istio’s proxying is done through an open source project called Envoy.

The control plane, a service named Istiod, handles service discovery and configuration management. It also generates the certificates used for secure communication in the data plane.

Istio also provides APIs to control these behaviors. The APIs fall into a handful of categories.

Virtual services

A virtual service lets you create rules for how traffic is routed. Each virtual service can be used to route traffic to an actual service in the mesh. For instance, if you are A/B testing two different implementations of a given API, you could route half the traffic to one version of the API. Or you could map calls to different API endpoints in a given domain to different physical servers.
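As a rough sketch of what that looks like, here is a minimal VirtualService manifest that splits traffic evenly between two versions of a service; the service name `reviews` and the subset names are placeholders, and the subsets themselves would be defined in a matching destination rule (covered next):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews            # the in-mesh service this routing rule applies to
  http:
  - route:
    - destination:
        host: reviews
        subset: v1     # subset defined in a DestinationRule
      weight: 50       # send half the traffic to v1...
    - destination:
        host: reviews
        subset: v2
      weight: 50       # ...and half to v2, for a simple A/B test
```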

Destination rules

Destination rules control what happens to traffic after it’s been routed through a virtual service. For instance, traffic arriving on different ports could have different load balancing policies.
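A sketch of that idea, again with placeholder names: this DestinationRule defines the `v1` and `v2` subsets referenced by the virtual service above and applies a different load-balancing policy to traffic arriving on each of two ports.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8080
      loadBalancer:
        simple: ROUND_ROBIN   # spread requests evenly on port 8080
    - port:
        number: 9090
      loadBalancer:
        simple: RANDOM        # pick a random healthy instance on port 9090
  subsets:
  - name: v1
    labels:
      version: v1             # pods labeled version=v1
  - name: v2
    labels:
      version: v2             # pods labeled version=v2
```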

Gateways

Gateways manage traffic into and out of the mesh as a whole, with load-balancing capabilities and L4-L6 network protocol controls. You can also bind a virtual service to a gateway to control where traffic is directed after that.
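For example, the Gateway below exposes HTTPS on port 443 at the edge of the mesh; a virtual service can then list this gateway by name in its `gateways` field to control where that incoming traffic goes. This is a minimal sketch, and the hostname and credential name are placeholders:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway          # bind to Istio's default ingress gateway pods
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "www.example.com"            # placeholder hostname
    tls:
      mode: SIMPLE                 # terminate TLS at the gateway
      credentialName: example-com-cert   # Kubernetes secret holding the certificate
```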

The NGINX web server and proxying system can be used as an ingress controller in Istio. This way, NGINX’s features for advanced load balancing and traffic routing can be used to route traffic into the Istio mesh, including features available only in NGINX’s commercial version. If you’re already familiar with NGINX’s routing features, you can leverage them in an Istio mesh this way.

Service entries

Service entries let you add an entry to Istio’s registry of known services. A registered service, such as an external API, is treated as though it were part of Istio’s mesh, even though it isn’t.
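A minimal sketch: the ServiceEntry below registers a hypothetical external API so that Istio’s routing rules, metrics, and policies apply to calls leaving the mesh for it. The hostname is a placeholder.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-payments-api
spec:
  hosts:
  - api.payments.example.com   # placeholder external hostname
  location: MESH_EXTERNAL      # the workload lives outside the mesh
  resolution: DNS              # resolve the host via DNS
  ports:
  - number: 443
    name: https
    protocol: TLS
```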

Sidecars

Envoy proxies are configured by default to allow inbound traffic from all ports and to allow outbound traffic to every other workload in the mesh. You can use a sidecar configuration to change this behavior.
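For example, a Sidecar resource along these lines (the namespace name is a placeholder) limits the proxies in one namespace to reaching only workloads in that same namespace plus the Istio control plane, which also reduces the amount of configuration each Envoy proxy has to carry:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: team-a            # placeholder namespace
spec:
  egress:
  - hosts:
    - "./*"                    # workloads in the same namespace
    - "istio-system/*"         # the Istio control plane
```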

Istio ambient mode

A relatively new Istio feature, “ambient mode,” lets you deploy Istio without running an Envoy sidecar proxy alongside each Kubernetes application pod. Instead, a shared agent on each Kubernetes cluster node (rather than in each application pod) handles the traffic, which means less overall processing overhead for traffic routing. It also allows a more gradual approach to rolling out Istio in a Kubernetes cluster. Note that ambient mode is still extremely new, though, and not yet recommended for production use.
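If you do want to experiment with it, ambient mode is opted into per namespace with a label rather than by injecting sidecars. A sketch, with a placeholder namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo                           # placeholder namespace
  labels:
    istio.io/dataplane-mode: ambient   # enroll this namespace in ambient mode
```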

Istio service mesh capabilities

The first and most valuable benefit Istio provides is abstraction: a way to keep the complexities of a service mesh at arm’s length. You can make changes to the mesh programmatically by way of Istio’s APIs, instead of configuring a slew of components by hand and hoping the changes take effect properly. Services connected to the mesh don’t need to be reprogrammed from the inside to follow new network policies or quotas, and the networking spaces between them don’t need to be touched directly either.

Istio also allows you to perform non-destructive or tentative changes to the cluster’s network configuration. If you want to roll out a new network layout, in whole or in part, or A/B test the current configuration against a new one, Istio lets you do it in a top-down way. You can also roll back those changes if they turn out to be unhealthy.

A third advantage is observability. Istio provides detailed statistics and reporting about what’s going on between containers and cluster nodes. If there is an unforeseen issue, if something isn’t adhering to policy, or if changes you made turn out to be counterproductive, you’ll be able to find out about it in short order.

Istio also provides ways to fulfill common patterns that you see in a service mesh. One example is the circuit-breaker pattern, a way to prevent a service from being bombarded with requests if the back end reports trouble and can’t fulfill requests in a timely way. Istio supports circuit breaking as a standard part of its traffic policy controls.
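In Istio, circuit breaking is expressed as a traffic policy in a destination rule. A rough sketch, with placeholder names and limits, that caps connections to a back end and temporarily ejects instances that keep returning errors:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend-circuit-breaker
spec:
  host: backend                      # placeholder service name
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100          # cap concurrent TCP connections
      http:
        http1MaxPendingRequests: 10  # cap queued HTTP requests
    outlierDetection:
      consecutive5xxErrors: 5        # eject an instance after five straight 5xx errors
      interval: 30s                  # how often instances are evaluated
      baseEjectionTime: 60s          # how long an ejected instance stays out
      maxEjectionPercent: 50         # never eject more than half the pool
```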

Finally, while Istio works most directly and deeply with Kubernetes, it is designed to be platform independent. Istio plugs into the same open standards that Kubernetes itself relies on, and the mesh can also be extended to workloads running on virtual machines outside the cluster.

How to get started with Istio

If you already have experience with Kubernetes, a good way to learn Istio is to take a Kubernetes cluster—not one already in production!—and install Istio on it using your preferred deployment method. Then you can deploy a sample application that demonstrates common Istio features like traffic management and observability. This should give you some ground-level experience with Istio before deploying it for service-mesh duty on your application cluster.
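One common route, assuming you use the istioctl installer, is to describe the installation with an IstioOperator resource and apply it with `istioctl install -f <file>`. The `demo` profile enables the features most useful for experimenting; this is a minimal sketch, and the resource name is a placeholder:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: demo-install
spec:
  profile: demo    # full-featured profile meant for evaluation, not production
```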

Red Hat, which has invested in Istio as part of the company’s Kubernetes-powered OpenShift project, offers tutorials that guide you through common Istio deployment and management scenarios.

Copyright © 2024 IDG Communications, Inc.
