Linkerd 2
Linkerd 2 is a zero-config and ultra-lightweight service mesh. Emissary natively supports Linkerd 2 for service discovery, end-to-end TLS (including mTLS between services), and (with Linkerd 2.8) multicluster operation.
Architecture
Linkerd 2 is designed for simplicity, security, and performance. In the cluster, it runs a control plane in its own namespace and injects sidecar proxy containers into every Pod that should be meshed.
Emissary itself also needs to be interwoven or “meshed” with Linkerd 2, and then configured to add special Linkerd headers to requests that tell Linkerd 2 where to forward them. This is because mTLS between services is handled automatically by the control plane and the proxies. Istio and Consul allow Emissary to initiate mTLS connections to upstream services by grabbing a certificate from a Kubernetes Secret; Linkerd 2 does not work this way, so Emissary must rely on Linkerd 2 itself for mTLS connections to upstream services. This means we want Linkerd 2 to inject its sidecar into Emissary’s Pods, which is not required for Istio or Consul.
With this setup, Emissary terminates external TLS at the gateway, and Linkerd 2 immediately re-wraps the traffic in mTLS, giving a full end-to-end TLS encryption chain.
Getting started
In this guide, you will use Linkerd 2 Auto-Inject to mesh a service and use Emissary to dynamically route requests to that service based on Linkerd 2’s service discovery data. If you already have Emissary installed, you will just need to install Linkerd 2 and deploy your service.
Setting up Linkerd 2 requires installing three components. The first is the CLI on your local machine, the second is the Linkerd 2 control plane in your Kubernetes cluster, and the third is the Linkerd sidecar injected into your services’ Pods to mesh them.
- Install and configure Linkerd 2. Follow the official instructions until Step 3; that gives you the CLI on your machine and all required pre-flight checks.
In a nutshell, these steps boil down to the following:
```shell
# install the linkerd CLI tool
curl -sL https://run.linkerd.io/install | sh

# add linkerd to your path
export PATH=$PATH:$HOME/.linkerd2/bin

# verify the installation
linkerd version
```
- Now it is time to install Linkerd 2 itself. To do so, execute the following commands:
```shell
# install the Linkerd control plane
linkerd install | kubectl apply -f -
linkerd check

# install the Linkerd dashboard component
linkerd viz install | kubectl apply -f -
linkerd viz check
```
This will install Linkerd 2 in your cluster. For more details on installing Linkerd, visit the Linkerd docs.
Note that this simple command automatically enables mTLS by default and registers the AutoInject Webhook with your Kubernetes API Server. You now have a production-ready Linkerd 2 setup rolled out into your cluster!
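If you want to confirm that the AutoInject webhook is registered, a quick check (a sketch; the exact configuration name can vary by Linkerd version) is:

```shell
# the proxy injector is a mutating admission webhook
kubectl get mutatingwebhookconfigurations | grep -i linkerd
```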
- Deploy Emissary. This guide assumes you have already followed the Emissary Getting Started guide; if you haven’t done that yet, do it now.
- Configure Emissary to add it to the Linkerd 2 service mesh:
```shell
kubectl -n emissary get deploy emissary-ingress -o yaml | \
  linkerd inject \
    --skip-inbound-ports 80,443 - | \
  kubectl apply -f -
```
This will tell Emissary to add additional headers to each request forwarded to Linkerd 2 with information about where to route it. This is a global setting; you can also set `add_linkerd_headers` per `Mapping`.
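As a sketch of what the global setting looks like, it lives in the `ambassador` `Module` (assuming Emissary is installed in the `emissary` namespace):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
  namespace: emissary  # assumed install namespace
spec:
  config:
    add_linkerd_headers: true
```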
Routing to Linkerd 2 services
You’ll now register a demo application with Linkerd 2 and see how Emissary can route to this application using endpoint data from Linkerd 2.
- Enable AutoInjection on the Namespace you are about to deploy to:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: default # change this to your namespace if you're not using 'default'
  annotations:
    linkerd.io/inject: enabled
```
Save the above to a file called `namespace.yaml` and run `kubectl apply -f namespace.yaml`. This enables the namespace to be handled by the AutoInjection webhook of Linkerd 2: every time something is deployed to that namespace, the deployment is passed to the AutoInject controller and automatically injected with the Linkerd 2 proxy sidecar.
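To double-check that the annotation is in place, you can read it back (note the escaped dots required by `jsonpath`):

```shell
# expected output: enabled
kubectl get namespace default \
  -o jsonpath='{.metadata.annotations.linkerd\.io/inject}'
```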
- Deploy the QOTM demo application. You may have already done this as part of the Getting Started guide; if so, restart the application using the rollout restart command provided below.
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qotm
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qotm
  template:
    metadata:
      labels:
        app: qotm
    spec:
      containers:
      - name: qotm
        image: docker.io/datawire/qotm:$qotmVersion$
        ports:
        - name: http-api
          containerPort: 5000
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          httpGet:
            path: /health
            port: 5000
          initialDelaySeconds: 60
          periodSeconds: 3
        resources:
          limits:
            cpu: "0.1"
            memory: 100Mi
---
apiVersion: v1
kind: Service
metadata:
  name: qotm-linkerd2
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    targetPort: 5000
  selector:
    app: qotm
---
```
Save the above to a file called `qotm.yaml` and deploy it with `kubectl apply -f qotm.yaml`. If you already had QOTM deployed, restart it with `kubectl rollout restart deploy qotm -n default`. Watch via `kubectl get pod -w` as the Pod is created: note that it starts with `0/2` containers, because it has been auto-injected by the Linkerd 2 webhook.
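You can also list the containers in the Pod to see the injected sidecar by name:

```shell
# the qotm Pod should show linkerd-proxy alongside the qotm container
kubectl get pod -l app=qotm -n default \
  -o jsonpath='{.items[0].spec.containers[*].name}'
```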
- Verify that the QOTM pod has been registered with Linkerd 2. You can do this by accessing the Linkerd 2 dashboard:
```shell
linkerd viz dashboard
```
Your browser should automatically open the correct URL. Otherwise, note the output from the above command and open that in a browser of your choice.
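If you prefer the CLI to the dashboard, the viz extension can show the same information; for example:

```shell
# live stats for meshed deployments in the default namespace
linkerd viz stat deploy -n default
```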
- Create a `Mapping` for the `qotm-linkerd2` service:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: linkerd2-qotm
spec:
  hostname: "*"
  prefix: /qotm-linkerd2/
  service: qotm-linkerd2
```
Save the above YAML to a file named `qotm-mapping.yaml` and apply it to your Kubernetes cluster with `kubectl apply -f qotm-mapping.yaml`. Note that there is nothing in this configuration specific to Linkerd 2: the general Emissary configuration already adds Linkerd headers when forwarding requests to the service mesh.
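If you’d rather not enable the headers globally, a per-`Mapping` sketch of the same thing (using the `add_linkerd_headers` attribute mentioned earlier) might look like:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: linkerd2-qotm
spec:
  hostname: "*"
  prefix: /qotm-linkerd2/
  service: qotm-linkerd2
  add_linkerd_headers: true
```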
- Send a request to the `qotm-linkerd2` API:

```shell
curl -L http://$AMBASSADOR_IP/qotm-linkerd2/

{"hostname":"qotm-749c675c6c-hq58f","ok":true,"quote":"The last sentence you read is often sensible nonsense.","time":"2019-03-29T22:21:42.197663","version":"1.7"}
```
Congratulations! You’re successfully routing traffic to the QOTM application, whose location is registered in Linkerd 2. The traffic to Emissary is not TLS-secured, but the traffic from Emissary to QOTM uses an automatic mTLS connection.
If you now configure TLS termination in Emissary, you have an end-to-end secured connection.
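One way to observe that mTLS is actually in use (assuming the viz extension installed earlier):

```shell
# show the edges between meshed workloads, including their TLS identities
linkerd viz edges deployment -n default
```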
Multicluster operation
Linkerd 2.8 can support multicluster operation, where the Linkerd mesh transparently bridges from one cluster to another, allowing seamless access between the two. This works using the Linkerd “service mirror controller” to discover services in the target cluster, and expose (mirror) them in the source cluster. Requests to mirrored services in the source cluster are transparently proxied via Emissary in the target cluster to the appropriate target service, using Linkerd’s automatic mTLS to protect the requests in flight between clusters. By configuring Linkerd to use the existing Emissary as the ingress gateway between clusters, you eliminate the need to deploy and manage an additional ingress gateway.
Initial multicluster setup
- Install Emissary and the Linkerd multicluster control plane. Make sure you’ve also linked the clusters.
- Inject the Emissary deployment with Linkerd (even if you have AutoInject enabled):
```shell
kubectl -n emissary get deploy emissary-ingress -o yaml | \
  linkerd inject \
    --skip-inbound-ports 80,443 \
    --require-identity-on-inbound-ports 4183 - | \
  kubectl apply -f -
```
(It’s important to require identity on the gateway port so that automatic mTLS works, but it’s also important to let Emissary handle its own ports. AutoInject can’t handle this on its own.)
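`linkerd inject` works by writing `config.linkerd.io/*` annotations onto the pod template, so you can confirm the flags took effect with:

```shell
# the skip-inbound-ports and require-identity annotations should appear here
kubectl -n emissary get deploy emissary-ingress \
  -o jsonpath='{.spec.template.metadata.annotations}'
```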
- Configure Emissary as normal for your application.
At this point, your Emissary installation should work fine with multicluster Linkerd as a source cluster: you can configure Linkerd to bridge to a target cluster, and all should be well.
Using the cluster as a target cluster
Allowing the Emissary installation to serve as a target cluster requires explicitly giving permission for Linkerd to mirror services from the cluster, and explicitly telling Linkerd to use Emissary as the target gateway.
- Configure the target cluster Emissary to allow insecure routing.

When Emissary is running in a Linkerd mesh, Linkerd provides transport security, so connections coming in from the Linkerd in the source cluster will always be HTTP when they reach Emissary. Therefore, the `Host` CRDs corresponding to services that you’ll be accessing from the source cluster must be configured to `Route` insecure requests. More information on this topic is available in the `Host` documentation; an example might be:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: linkerd-host
spec:
  hostname: host.example.com
  acmeProvider:
    authority: none
  requestPolicy:
    insecure:
      action: Route
```
- Configure the target cluster Emissary to support Linkerd health checks.

Multicluster Linkerd does its own health checks beyond what Kubernetes does, so a `Mapping` is needed to allow Linkerd’s health checks to succeed:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: public-health-check
  namespace: ambassador
spec:
  hostname: "*"
  prefix: /-/ambassador/ready
  rewrite: /ambassador/v0/check_ready
  service: localhost:8877
  bypass_auth: true
```
When configuring Emissary, Kubernetes is usually configured to run health checks directly against port 8877; however, that port is not meant to be exposed outside the cluster. The `Mapping` above permits accessing the health check endpoint without directly exposing the port. (The actual prefix in the `Mapping` is not terribly important, but it needs to match the metadata supplied to the service mirror controller, below.)
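To sanity-check the probe route from outside the cluster, a quick curl against the gateway (using a hypothetical `$EMISSARY_IP` for the target cluster’s external address) should succeed:

```shell
# Linkerd's probes hit this same path; a 200 here means the Mapping works
curl -i http://$EMISSARY_IP/-/ambassador/ready
```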
- Configure the target cluster Emissary for the service mirror controller.

This requires changes to Emissary’s `deployment` and `service`. For all of these commands, you will need to make sure your Kubernetes context is set to talk to the target cluster.

In the `deployment`, you need the `config.linkerd.io/enable-gateway` `annotation`:

```shell
kubectl -n emissary patch deploy emissary-ingress -p='
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/enable-gateway: "true"
'
```
In the `service`, you need to provide appropriate named `port` definitions:

- `mc-gateway` needs to be defined as `port` 4143
- `mc-probe` needs to be defined as `port` 80, `targetPort` 8080 (or wherever Emissary is listening)

```shell
kubectl -n emissary patch svc emissary-ingress --type='json' -p='[
  {"op":"add","path":"/spec/ports/-", "value":{"name": "mc-gateway", "port": 4143}},
  {"op":"replace","path":"/spec/ports/0", "value":{"name": "mc-probe", "port": 80, "targetPort": 8080}}
]'
```
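Note that the JSON patch replaces the first entry of `spec.ports`, so it’s worth inspecting the existing list first to be sure index 0 is the one you want to overwrite:

```shell
# list the service's current ports before patching index 0
kubectl -n emissary get svc emissary-ingress -o jsonpath='{.spec.ports}'
```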
Finally, the `service` also needs its own set of `annotation`s:

```shell
kubectl -n emissary patch svc emissary-ingress -p='
metadata:
  annotations:
    mirror.linkerd.io/gateway-identity: ambassador.ambassador.serviceaccount.identity.linkerd.cluster.local
    mirror.linkerd.io/multicluster-gateway: "true"
    mirror.linkerd.io/probe-path: -/ambassador/ready
    mirror.linkerd.io/probe-period: "3"
'
```
(Here, the value of `mirror.linkerd.io/probe-path` must match the `prefix` used for the probe `Mapping` above.)
- Configure individual exported services. Adding the following annotations to a service will tell Linkerd that the service should use Emissary as the gateway:
```shell
kubectl -n $namespace patch svc $service -p='
metadata:
  annotations:
    mirror.linkerd.io/gateway-name: emissary-ingress
    mirror.linkerd.io/gateway-ns: emissary
'
```
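After linking, Linkerd’s service mirror creates a copy of each exported service in the source cluster, named with the link name appended (for a link called `target`, the mirror of `myservice` appears as `myservice-target`). You can check for it from the source cluster:

```shell
# run with your context set to the source cluster
kubectl -n $namespace get svc | grep $service
```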
These annotations tell Linkerd that the given service can be reached via the Emissary in the `emissary` namespace.
- Verify that all is well from the source cluster.
For all of these commands, you’ll need to set your Kubernetes context for the source cluster.

First, check to make sure that the clusters are correctly linked:

```shell
linkerd check --multicluster
```

Next, make sure that the Emissary gateway shows up when listing active gateways:

```shell
linkerd multicluster gateways
```

At this point, all should be well!
More information
For more about Emissary’s integration with Linkerd 2, read the service discovery configuration documentation. For further reading about Linkerd 2 multicluster operation, see the Linkerd install documentation and introduction.