Part 2 — How to run Service Mesh on OpenShift, the enterprise Kubernetes

Mohit Aggarwal
11 min read · Oct 7, 2020

In Part 1 of this blog post I covered the Red Hat OpenShift Service Mesh components and architecture in detail. In this post I will walk you through how to:

  • Install Service Mesh on Red Hat OpenShift using Operators
  • Deploy microservices on OpenShift
  • Configure dynamic routing with Service Mesh
  • Secure services with Service Mesh
  • Observe services within the Service Mesh with Kiali

Prerequisites

Before you start with the installation of Service Mesh on Red Hat OpenShift, ensure that the following prerequisites have been completed.

  1. You should have a Red Hat OpenShift 4.x cluster up and running.
  2. Install the OpenShift oc CLI on your laptop. Make sure that the version of the oc CLI matches the OpenShift cluster version. See About the CLI.

Installing Service Mesh on OpenShift

Service Mesh is deployed using Operators on Red Hat OpenShift. Before you install the Service Mesh Operator, you must first install the Elasticsearch, Jaeger, and Kiali Operators, which are available in OperatorHub within the OpenShift console. After those three Operators are installed, proceed with the installation of the Red Hat OpenShift Service Mesh Operator.

Follow these steps to install Service Mesh on OpenShift.

  1. Log in to the OpenShift cluster with admin privileges.

oc login https://<host>:6443 -u <username> -p <password>

2. Create the istio-system project

oc new-project istio-system

3. Go to OperatorHub and install the Elasticsearch, Jaeger, and Kiali Operators. Verify that the Operators have been installed successfully.
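If you prefer the CLI to the OperatorHub UI, each Operator can also be installed by creating a Subscription resource. The sketch below shows the Service Mesh Operator subscription; the package name, channel, and catalog source are typical values and may differ on your cluster, so verify them in OperatorHub first.

```yaml
# Sketch: subscribe to the Red Hat OpenShift Service Mesh Operator via the CLI.
# Package/channel/source values are typical defaults; confirm in OperatorHub.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator
  namespace: openshift-operators
spec:
  name: servicemeshoperator
  channel: stable
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Apply it with oc apply -f, and repeat with the corresponding packages for the Elasticsearch, Jaeger, and Kiali Operators. Running oc get csv -n openshift-operators shows when each Operator reaches the Succeeded phase.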

4. Then install the Service Mesh Operator. This Operator defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components.

5. After you have installed the Service Mesh Operator, go to the Installed Operators section in the OpenShift console and click the Red Hat OpenShift Service Mesh Operator. Under Istio Service Mesh Control Plane, click Create ServiceMeshControlPlane. You can change the default values in the YAML file, or create the Service Mesh control plane with the default values provided in the template. The Operator creates pods, services, and Service Mesh control plane components in the istio-system project based on your configuration parameters.
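For reference, a minimal ServiceMeshControlPlane for the 1.x Operator looks roughly like the following. This is an abbreviated sketch of the default template, so treat the field values as illustrative rather than exact.

```yaml
# Abbreviated sketch of a ServiceMeshControlPlane (OSSM 1.x API).
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system
spec:
  istio:
    tracing:
      enabled: true        # deploy Jaeger-based distributed tracing
    kiali:
      enabled: true        # deploy the Kiali console
    gateways:
      istio-ingressgateway:
        autoscaleEnabled: false
```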

6. Under Istio Service Mesh Member Roll, click Create ServiceMeshMemberRoll. Modify the YAML to add your projects as members. You can add any number of projects, but a project can belong to only one ServiceMeshMemberRoll resource. You can also perform this step later, once you have created the project in which you deploy the microservices.
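The member roll is a small resource. Note that it must be named default and live in the control plane project; with the demo project used later in this post, it looks like this:

```yaml
# ServiceMeshMemberRoll: projects listed under members join the mesh.
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default            # the name must be "default"
  namespace: istio-system  # the control plane project
spec:
  members:
    - servicemesh-demo     # project created in the next section
```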

That completes the installation of Service Mesh on OpenShift. Now let's deploy microservices on OpenShift so that Service Mesh can manage them.

Deploying Microservices on OpenShift

Three microservices, Gateway, Partner, and Catalog, will be deployed on OpenShift as an example. The interactions between these three microservices will be managed using Red Hat OpenShift Service Mesh.

Microservices communication

Perform the following steps to deploy the microservices on OpenShift.

  1. Clone the following git repository on your laptop

https://github.com/rh-maggarwa/openshift-service-mesh

2. Create a project (namespace) named servicemesh-demo in which you will deploy the microservices.

oc new-project servicemesh-demo

3. Start by deploying version 1 of the catalog service to OpenShift.

cd <path-to-cloned-repo>/catalog

oc create -f kubernetes/catalog-service-template.yml -n servicemesh-demo

4. Create an OpenShift service entry for the catalog service.

oc create -f kubernetes/Service.yml -n servicemesh-demo

5. Monitor the deployment of pods. Wait until the Ready column displays 2/2 pods and the Status column displays Running:

oc get pods -n servicemesh-demo -w

The pod corresponding to the catalog-v1 Deployment includes two containers:

  • catalog — a Linux container that provides the business functionality
  • istio-proxy — the Red Hat Service Mesh sidecar container, which forms the data plane and communicates with the Service Mesh control plane.

Because the catalog service is at the end of the service chain (gateway -> partner -> catalog), it will not be exposed to the outside world via a route.

6. Next, deploy the partner service to OpenShift.

cd <path-to-cloned-repo>/partner

oc create -f kubernetes/partner-service-template.yml -n servicemesh-demo

7. Create an OpenShift service entry for the partner service:

oc create -f kubernetes/Service.yml -n servicemesh-demo

8. Monitor the deployment of pods. Wait until the Ready column displays 2/2 pods and the Status column displays Running:

oc get pods -n servicemesh-demo -w

9. Finally, deploy the gateway service to OpenShift. This completes the set of services:

cd <path-to-cloned-repo>/gateway

oc create -f kubernetes/gateway-service-template.yml -n servicemesh-demo

10. Create an OpenShift service entry for the gateway service:

oc create -f kubernetes/Service.yml -n servicemesh-demo

11. Monitor the deployment of pods. Wait until the Ready column displays 2/2 pods and the Status column displays Running:

oc get pods -n servicemesh-demo -w

12. Next, expose the gateway service through the mesh ingress:

a) The gateway service is the one your users interact with.

b) Inbound traffic into the Service Mesh must enter via the Istio ingress gateway.

13. Apply the Service Mesh Gateway and VirtualService to allow inbound traffic to your gateway service on OpenShift:

oc apply -f kubernetes/service-mesh-gw.yml -n servicemesh-demo
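For context, service-mesh-gw.yml typically pairs an Istio Gateway with a VirtualService. The sketch below shows the general shape; the resource names, hostnames, and service port (8080) are assumptions for illustration, not the repo's exact contents.

```yaml
# Sketch of a Gateway + VirtualService pair for inbound mesh traffic.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-gateway
spec:
  selector:
    istio: ingressgateway    # bind to the default Istio ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gateway-vs
spec:
  hosts:
    - "*"
  gateways:
    - gateway-gateway
  http:
    - route:
        - destination:
            host: gateway    # the gateway Service created in step 10
            port:
              number: 8080   # assumed container port
```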

14. Retrieve the URL for the gateway service via the Istio ingress-gateway:

echo "export GATEWAY_URL=$(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')" >> ~/.bashrc

source ~/.bashrc

echo $GATEWAY_URL

15. Test the Gateway service

curl $GATEWAY_URL

If you see the output similar to the following, then that means that the service mesh is configured successfully for your microservices. You are now ready to test some of the service mesh capabilities in the following sections.

gateway => partner => catalog v1 from '6b576ffcf8-g6b48': 1

Dynamic Routing with Service Mesh

You can control the flow of traffic and API calls between services by configuring routing rules in Red Hat OpenShift Service Mesh. Route rules control how requests are routed within an Istio service mesh.

Requests can be routed based on the source and destination, HTTP header fields, and weights associated with individual service versions. For example, a route rule could route requests to different versions of a service.

In addition to the usual OpenShift object types like BuildConfig, DeploymentConfig, Service and Route, you also have new object types installed as part of Istio, like VirtualService. Adding these objects to the running OpenShift cluster is how you configure routing rules for Istio.

VirtualService defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for traffic of a specific protocol. If the traffic is matched, then it is sent to a named destination service (or subset/version of it) defined in the registry. The source of traffic can also be matched in a routing rule. This allows routing to be customised for specific client contexts.

DestinationRule defines policies that apply to traffic intended for a service after routing has occurred. These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load-balancing pool.
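As a concrete example, a DestinationRule can group the two catalog versions into named subsets by their version label. The sketch below is similar in spirit to the destination-rule-catalog-v1-v2.yml file used later in this post; the subset names here are assumptions.

```yaml
# Sketch: group catalog pods into routable subsets by their version label.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: catalog
spec:
  host: catalog
  subsets:
    - name: version-v1
      labels:
        version: v1
    - name: version-v2
      labels:
        version: v2
```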

For example, you may want to dynamically alter routing between different versions of a microservice. Let's deploy a second version of the catalog service (v2) and configure Istio routing rules.

Dynamic routing between different versions of the microservice
  1. Deploy version 2 of the catalog service to OpenShift.

cd <path-to-cloned-repo>/catalog-v2

oc create -f kubernetes/catalog-service-template.yml -n servicemesh-demo

2. Monitor the deployment of pods. Wait until the Ready column displays 2/2 pods and the Status column displays Running:

oc get pods -l application=catalog -n servicemesh-demo -w

3. Execute the following command to view the values of the labels of the v1 version of the catalog:

oc get deploy catalog-v1 -o json -n servicemesh-demo | jq .spec.template.metadata.labels

The output will be similar to:

{
  "application": "catalog",
  "version": "v1"
}

4. Execute the following command to view the values of the labels of the v2 version of the catalog:

oc get deploy catalog-v2 -o json -n servicemesh-demo | jq .spec.template.metadata.labels

The output will be similar to:

{
  "application": "catalog",
  "version": "v2"
}

5. Notice from the previous results that the value of the catalog service Selector (application=catalog) matches one of the labels found in both v1 and v2 of the catalog deployments. The implication of this is that OpenShift will load balance incoming requests in a round-robin manner to v1 and v2 of the catalog pods.

6. Test the Gateway service

curl $GATEWAY_URL

gateway => partner => catalog v1 from '6b576ffcf8-g6b48': 2

Here, 6b576ffcf8-g6b48 is the pod running v1, and 2 is the number of times the endpoint was hit.

7. Make another request to the Gateway service.

curl $GATEWAY_URL

gateway => partner => catalog v2 from '7764964564-hj8xl': 1

Here, 7764964564-hj8xl is the pod running v2, and 1 is the number of times the endpoint was hit. By default, OpenShift round-robin load balancing is applied when there is more than one pod behind a service.
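To watch the round-robin behavior over many requests, a small shell helper can save typing. hit_gateway below is a hypothetical convenience wrapper, not part of the demo repo; it assumes GATEWAY_URL is already exported as in step 14.

```shell
# Hypothetical helper: send N requests to the mesh ingress and print
# each response on its own line. Assumes GATEWAY_URL is exported.
hit_gateway() {
  local n=${1:-5}
  local i
  for i in $(seq 1 "$n"); do
    curl -s "$GATEWAY_URL"
    echo
  done
}
```

Running hit_gateway 10 should show the responses split between catalog v1 and catalog v2 pods while OpenShift's default round-robin load balancing is in effect.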

8. Now there might be a scenario where you would like to route all traffic to a single version of a microservice. For example, the following Istio configuration routes all traffic to the v2 version of the catalog service.

vi istiofiles/virtual-service-catalog-v2.yml

The file contains a VirtualService similar to:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  http:
  - route:
    - destination:
        host: catalog
        subset: version-v2
      weight: 100

This definition allows you to configure a percentage of traffic and direct it to a specific version of the catalog service. In this case, 100% of traffic (weight) for the catalog service always goes to pods matching the labels version v2.

The selection of pods here is very similar to the Kubernetes selector model for matching based on labels. So, any service within the service mesh that tries to communicate with the catalog service is always routed to v2 of the catalog service.

9. Route traffic to v2 using the configuration file:

oc create -f istiofiles/destination-rule-catalog-v1-v2.yml -n servicemesh-demo

oc create -f istiofiles/virtual-service-catalog-v2.yml -n servicemesh-demo

Make a few more requests to the Gateway service and check that all of them are now routed to v2 of the catalog service.

Securing services with Service Mesh

Mutual Transport Layer Security (mTLS) is a protocol in which two parties authenticate each other at the same time. mTLS can be used without changes to the application or service code: the TLS handshake is handled entirely by the service mesh infrastructure, between the two sidecar proxies.

By default, Red Hat OpenShift Service Mesh is set to permissive mode, where the sidecars in Service Mesh accept both plain-text traffic and connections that are encrypted using mTLS. If a service in your mesh is communicating with a service outside the mesh, strict mTLS could break communication between those services. Use permissive mode while you migrate your workloads to Service Mesh.
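If you later want to move a whole namespace from permissive to strict mode, OSSM 1.x (based on Istio 1.4) uses the older authentication Policy API. The following is a sketch of a namespace-wide policy; the API version is an assumption that depends on your mesh version, so check your cluster's installed CRDs first.

```yaml
# Sketch: require mTLS for all workloads in servicemesh-demo (Istio 1.4-era API).
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default              # a namespace-wide policy must be named "default"
  namespace: servicemesh-demo
spec:
  peers:
    - mtls: {}               # an empty mtls block means STRICT mode
```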

The following catalog service example will show you how you can configure mTLS for secure communication within your services.

  1. Edit the DestinationRule of the catalog service and add the mTLS configuration shown below.

vi istiofiles/destination-rule-catalog-v1-v2.yml

You can enable mTLS by adding a trafficPolicy section like the following to your YAML file:

trafficPolicy:
  tls:
    mode: ISTIO_MUTUAL

2. Make a request to the Gateway service.

curl $GATEWAY_URL

Now, go to your Kiali dashboard and verify that all requests routed from the partner service to the catalog service are secured with mTLS.

Monitor services with Service Mesh

Application monitoring is an important area for enterprises developing microservices-based applications. In this section, you will see how to monitor the microservices using Kiali.

At some point when you are developing your microservices architecture, you may want to visualize what is happening in your service mesh. You may have questions like “Which service is connected to which other service?” and “How much traffic goes to each microservice?” But because of the loosely tied nature of microservices architectures, these questions can be difficult to answer.

Those are the kinds of questions that Kiali can answer, by giving you a big picture of the mesh and showing the whole flow of your requests and data.

Kiali Request Flow Graph

Kiali taps into the data provided by Istio and OpenShift Container Platform to generate its visualizations. It fetches ingress data (such as request tracing with Jaeger), the listing and data of the services, health indexes, and so on.

Kiali runs as a service together with Istio, and does not require any changes to Istio or OpenShift Container Platform configuration (besides the ones required to install Istio).

  1. Get the URL of the Kiali web console and set as an environment variable:

export KIALI_URL=https://$(oc get route kiali -n istio-system -o template --template='{{.spec.host}}')

2. Display the Kiali URL:

echo $KIALI_URL

3. Start a web browser on your computer and go to $KIALI_URL. Log in to the Kiali dashboard with your OpenShift credentials.

Kiali Login Console

4. On the Kiali web console, click Graph on the left hand panel. From the Namespace list, select servicemesh-demo.

The page shows a graph of the microservices, connected by the requests going through them. On this page you can see how the services interact with each other, and you can zoom in or out.

5. In the left-hand panel, click Services. On the Services page you can view a listing of the services that are running in the cluster, and additional information about them such as health status. Observe that the Namespace list is set to servicemesh-demo. This filters the list of services to just those for this demo namespace.

Summary

In this blog post, we saw how easy it is to install Service Mesh and to configure dynamic routing and security between microservices in the mesh. We then saw how to use Kiali to observe services within the Service Mesh. In a nutshell, Service Mesh is a great tool that solves complex problems introduced by the microservices paradigm. With the OpenShift Service Mesh components, enterprises have the right tools to offload the complexity of creating and managing intra-service communication in a microservices-based application.

References

https://learn.openshift.com/servicemesh

https://docs.openshift.com/container-platform/4.5/service_mesh/service_mesh_arch/understanding-ossm.html

https://docs.openshift.com/container-platform/4.5/service_mesh/service_mesh_arch/ossm-kiali.html

https://docs.openshift.com/container-platform/4.5/service_mesh/service_mesh_arch/ossm-jaeger.html

https://docs.openshift.com/container-platform/4.5/service_mesh/service_mesh_arch/ossm-vs-community.html

https://docs.openshift.com/container-platform/4.5/service_mesh/service_mesh_install/preparing-ossm-installation.html
