Part 1 — Why Red Hat OpenShift Service Mesh?

Mohit Aggarwal
7 min read · Oct 6, 2020


As part of their digital transformation journey, organisations are breaking down their large monolithic applications into smaller services, a.k.a. microservices, which they can build, deploy, run, and scale independently.

Monolith to Microservices

But the challenge is that as the number of microservices in the system grows, the communication between them becomes increasingly complex and ever harder to understand and manage.

Microservices Distributed Architecture

This complexity arises because what used to be a function call from one piece of code to another inside the monolith has now become an API call over the network. We are therefore relying far more on network characteristics than we did when developing monolithic applications, and we cannot ignore the fact that communication between microservices now depends on the reliability of the network. To deal with network-related issues, developers try to build resilience into their microservices by writing code for retries, timeouts, circuit breakers, tracing, load balancing, and so on. As a result, developers put a lot of effort into developing and testing these cloud-native capabilities within their microservices instead of focusing on writing the business logic.

What is a Service Mesh?

The term service mesh describes the network of microservices that make up such applications and the interactions between them. Its goal is to reduce the complexity of service deployments and to ease the strain on your development teams — within the scope of connect, secure, control, and observe.

Red Hat OpenShift Service Mesh is based on the open source Istio project. It adds a transparent layer to existing distributed applications without requiring any changes to your microservices code. Instead of making each service responsible for these complexities and adding more and more code to every microservice just to deal with cloud-native concerns, the OpenShift platform provides non-functional capabilities such as retries, timeouts, circuit breaking, load balancing, service discovery, and distributed tracing to any application (existing or new, in any programming language or framework) running on the platform. The services can then truly be micro and focus on their business logic rather than on cloud-native complexities.

Service mesh capabilities are broadly defined under four key categories.

  • Traffic Management — Rules and traffic routing let you control the flow of traffic and API calls between services.
  • Service Identity and Security — Give services in the mesh a verifiable identity, and enforce security consistently across diverse protocols and runtimes with little or no application change.
  • Policy Enforcement — Apply organisational policy to the interactions between services and ensure access policies are enforced. Changes are made by configuring the mesh, not by changing application code.
  • Observability — Gain an understanding of the dependencies between services and of the nature and flow of traffic between them, so you can quickly identify and fix issues.
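To make the traffic-management category concrete, here is a minimal sketch of an Istio VirtualService that shifts a slice of traffic to a canary version of a service. The service name, subsets, and weights are illustrative, not taken from the article:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews            # hypothetical service name
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1         # subsets are defined in a matching DestinationRule
      weight: 90           # 90% of traffic stays on v1
    - destination:
        host: reviews
        subset: v2
      weight: 10           # 10% is shifted to the v2 canary
```

Changing the weights and re-applying the resource shifts traffic without touching application code — which is exactly the "configure the mesh, not the application" idea described above.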

You add Red Hat OpenShift Service Mesh support to services by deploying a special sidecar proxy to relevant services in the mesh that intercepts all network communication between microservices. You configure and manage the Service Mesh using the control plane features.
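In Red Hat OpenShift Service Mesh, sidecar injection is opt-in: you annotate the pod template of each workload that should join the mesh. A sketch, with a hypothetical deployment name and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1          # hypothetical workload
spec:
  selector:
    matchLabels:
      app: reviews
  template:
    metadata:
      labels:
        app: reviews
      annotations:
        sidecar.istio.io/inject: "true"   # ask the mesh to inject the Envoy sidecar proxy
    spec:
      containers:
      - name: reviews
        image: quay.io/example/reviews:v1  # hypothetical image
```

Note that this differs from upstream Istio, which can inject sidecars automatically for every pod in a labelled namespace; OpenShift Service Mesh deliberately requires the per-workload annotation.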

Without a service mesh, each microservice needs to be coded with logic to govern the service-to-service communication, which means developers are less focused on business goals. It also means communication failures are harder to diagnose because the logic that governs interservice communication is hidden within each service.

Service Mesh Architecture

The figure below depicts the OpenShift Service Mesh architecture. OpenShift Service Mesh is logically split into a data plane and a control plane.

Service Mesh Architecture
  • Data Plane — The data plane is a set of intelligent Envoy proxies deployed as sidecars in the application pods. The Envoy proxies intercept and control all inbound and outbound network communication between microservices in the service mesh. The sidecar proxies also communicate with Mixer, a general-purpose policy and telemetry hub.
  • Control Plane — The control plane manages and configures the proxies to route traffic, and configures Mixer to enforce policies and collect telemetry. The control plane comprises the following components:

a) Mixer — Mixer enforces access control and usage policies (such as authorisation, rate limits, quotas, authentication, and request tracing) and collects telemetry data from the Envoy proxy and other services.

Mixer provides three core features:

  1. Precondition Checking. Enables callers to verify a number of preconditions before responding to an incoming request from a service consumer. Preconditions can include whether the service consumer is properly authenticated, is on the service’s whitelist, passes ACL checks, and more.
  2. Quota Management. Enables services to allocate and free quota on a number of dimensions. Quotas are used as a relatively simple resource management tool to provide some fairness between service consumers when contending for limited resources. Rate limits are examples of quotas.
  3. Telemetry Reporting. Enables services to report logging and monitoring. In the future, it will also enable tracing and billing streams intended for both the service operator as well as for service consumers.

b) Pilot — Pilot configures the proxies at runtime. Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (for example, A/B tests or canary deployments), and resiliency (timeouts, retries, and circuit breakers).
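The resiliency features that Pilot distributes to the Envoy sidecars are declared as ordinary resources. As an illustration, a sketch of a DestinationRule that adds a simple circuit breaker to a hypothetical ratings service (field names can vary slightly between Istio versions):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings             # hypothetical service
spec:
  host: ratings
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100         # cap concurrent TCP connections
      http:
        http1MaxPendingRequests: 10 # cap queued HTTP requests
    outlierDetection:
      consecutiveErrors: 5          # eject an endpoint after 5 consecutive errors
      interval: 30s                 # how often endpoints are analysed
      baseEjectionTime: 60s         # minimum ejection duration
```

Pilot pushes this configuration to every relevant sidecar, so the circuit breaking happens in the proxy rather than in application code.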

c) Citadel — Citadel issues and rotates certificates. Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. You can use Citadel to upgrade unencrypted traffic in the service mesh. Operators can enforce policies based on service identity rather than on network controls using Citadel.
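As a hedged sketch of how an operator might use Citadel's identities to upgrade plain traffic to mutual TLS, the resources below require mTLS for a hypothetical bookinfo namespace. The Policy API shown matches the Mixer-era Istio release this article describes; newer Istio versions use PeerAuthentication instead:

```yaml
# Require incoming connections in the namespace to use mTLS
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: bookinfo        # hypothetical application namespace
spec:
  peers:
  - mtls: {}
---
# Tell client sidecars in the namespace to send mTLS with Citadel-issued certificates
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: bookinfo
spec:
  host: "*.bookinfo.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```

The certificates themselves are issued and rotated by Citadel, so the applications never handle key material.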

d) Galley — Galley ingests the service mesh configuration, then validates, processes, and distributes it. Galley insulates the other service mesh components from the details of obtaining user configuration from OpenShift Container Platform.

Red Hat OpenShift Service Mesh Components

The figure below depicts the Red Hat OpenShift Service Mesh components.

Red Hat OpenShift Service Mesh Components

The components of Red Hat OpenShift service mesh are broken down as follows:

  • Istio — Istio is at the heart of the solution: a control plane that configures and coordinates a fleet of proxy servers, allowing us to control the flow of traffic throughout the system.
  • Prometheus & Grafana — Prometheus and Grafana provide telemetry and visualisation for a wide swath of metrics. This combination automatically captures information directly from a number of network endpoints (the Envoy proxies) deployed as part of the mesh, and the strong structuring of the data makes it easy to produce graphs.
  • Jaeger — Jaeger is included as an OpenTracing-compliant solution for capturing requests that span multiple services within the mesh. This gives us the ability to correlate data across an entire call chain between services.
  • Kiali — Kiali is a net-new project created by Red Hat specifically for use with Istio, allowing for the visualisation of the entire graph of services scheduled within an instance of the service mesh.

Red Hat OpenShift Service Mesh gives you an easy way to create a network of deployed services that provide discovery, load balancing, service-to-service authentication, failure recovery, metrics & monitoring. It also provides more complex operational functions including A/B testing, Canary releases, Rate limiting, Access control & End-to-end authentication.

Red Hat OpenShift Service Mesh Benefits

  • OpenShift Container Platform now provides the service mesh capabilities, so existing polyglot monoliths as well as polyglot microservices are fully supported. Irrespective of the runtimes, you can now make use of the service mesh capabilities.
  • Developers can focus on the business implementation rather than worrying about the programming overhead of communication in a service mesh.
  • Operator makes it easy to install the Service Mesh components on OpenShift Container Platform
  • Performance metrics can suggest ways to optimise the service mesh
  • Applications are more resilient to downtime
  • Jaeger provides distributed tracing capabilities which can be visualised using Kiali. This makes it easier to recognise and diagnose problems.

Red Hat OpenShift Service Mesh Installation

OpenShift Service Mesh is deployed using Operators. Before you install Service Mesh on OpenShift, you must first install these Operators:

  • Elasticsearch — Based on the open source Elasticsearch project, this Operator enables you to configure and manage an Elasticsearch cluster for tracing and logging with Jaeger.
  • Jaeger — Based on the open source Jaeger project, this Operator lets you perform tracing to monitor and troubleshoot transactions in complex distributed systems.
  • Kiali — Based on the open source Kiali project, this Operator provides observability for your service mesh. Using Kiali, you can view configurations, monitor traffic, and view and analyse traces in a single console.

After you install the Elasticsearch, Jaeger, and Kiali Operators, then you install the Red Hat OpenShift Service Mesh Operator. The Service Mesh Operator defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components.
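A minimal sketch of the two resources involved, assuming a control-plane project named istio-system and a hypothetical bookinfo application project (exact fields vary by Service Mesh version):

```yaml
# Control plane definition, created in its own project
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system
spec:
  istio:
    tracing:
      enabled: true          # deploy Jaeger through the Jaeger Operator
    kiali:
      enabled: true          # deploy Kiali through the Kiali Operator
---
# Declare which projects (namespaces) belong to this mesh
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default              # the member roll must be named "default"
  namespace: istio-system
spec:
  members:
  - bookinfo                 # hypothetical application project
```

Applying the ServiceMeshControlPlane triggers the Operator to deploy the control-plane components, and the ServiceMeshMemberRoll tells the mesh which application projects to manage.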

This doesn’t end here….

So, we’ve gotten this far with Part 1 of the series: “Why Red Hat OpenShift Service Mesh?” Although we’ve covered quite a lot, there is still a lot more to learn. Don’t miss the next article if you want to find out exactly how to install and configure Service Mesh on Red Hat OpenShift. Stay tuned for the next post, PART 2.


Written by Mohit Aggarwal

Senior Specialist Solution Architect @ Red Hat
