Posts in this series
- dapr 101 - Concepts and Setup (This Post)
- dapr 102 - State Management and Tracing
- dapr 103 - Pub/Sub and Observability Metrics
In a world where distributed systems are the talk of the town and technology moves at the speed of light, I am not surprised that more buzzwords are being introduced than any developer can keep track of. One such word is dapr, the Distributed Application Runtime. This is the first part of a series of blog posts where I try to make sense of this new world order and write some code to demystify the inner workings of this new framework.
If you visit the home planet of dapr @ dapr.io, it claims itself to be the following.

Dapr is a portable, event-driven runtime that makes it easy for developers to build resilient, stateless and stateful microservice applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.

Yes, that is a whole lot of jargon in one sentence. Let us first break this down and see what dapr actually is and why we should be looking into it.
Today, when you build a microservice it comes with a set of prerequisite expectations, such as being event-driven. In a development team comprised of developers working with varying programming languages and frameworks, meeting these expectations consistently is a complicated process, and developers end up spending an extensive amount of time building the infrastructure to achieve it.
There are frameworks, such as eventsourcing for python and its equivalents for java. However, these are tightly bound to the frameworks themselves, are not extensible across platforms, and can't be leveraged by applications built in a programming language they don't support.
This unwanted waste of developer time is, in simple words, what dapr claims to save. It provides fundamental entities called building blocks. You are free to consume one or many of these building blocks for your use-case as you see fit.
dapr is built with the following fundamental ideas and goals in mind.
- Provide standardized building block components
- Be language and platform agnostic
- Most importantly, be portable and Open API driven
Building Blocks of dapr
In its simplest form, dapr implements a sidecar pattern where each container is paired with a sidecar container that is responsible for ensuring service to service communication with proper discovery implemented.
dapr has an opt-in building block component in the form of a state store that can be leveraged by the APIs that need some state. State stores are backed by a defined set of key-value stores that you can choose from.
Currently the following facts hold true for the state management building block.
- Uses redis as the default state management component
- Has support for ETag out of the box
- Defines an Open API schema for interacting with state management components
- Supports configurable metadata on the state items
- Each state store is key spaced, i.e. prefixed with a specific pattern to identify the kind of state being persisted.
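To make the key-spacing idea concrete, here is a minimal sketch of how a key might be prefixed. The `||` separator follows dapr's documented key scheme, but the exact format can differ between versions, so treat this as illustrative rather than authoritative:

```python
# Sketch of dapr's state key-spacing: the sidecar prefixes every key with
# the calling app's App ID before handing it to the underlying store, so
# two apps can use the same key without colliding. The "||" separator is
# an assumption based on dapr's documented key scheme.
def key_space(app_id: str, key: str) -> str:
    return f"{app_id}||{key}"

print(key_space("orders-service", "order-42"))  # orders-service||order-42
```

With this scheme, `orders-service` and `billing-service` can each persist a key named `order-42` into the same redis instance without stepping on each other.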
dapr provides a standard pub/sub mode of communication between the microservices, using the App ID parameter and a defined schema to exchange the messages.
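The defined schema dapr uses for pub/sub messages is the CloudEvents envelope. As a hedged sketch, a published message might look like this on the wire (all field values here are illustrative, not taken from a real dapr payload):

```json
{
  "specversion": "1.0",
  "type": "com.dapr.event.sent",
  "source": "orders-service",
  "id": "insert-unique-event-id-here",
  "datacontenttype": "application/json",
  "data": { "orderId": 42, "status": "created" }
}
```

The `data` field carries your application payload, while the envelope fields let any subscriber, regardless of language, identify and trace the event consistently.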
dapr comes with a building block for including tracing in your services without much additional effort. It leverages OpenTelemetry to enable tracing and metrics collection, with the standard W3C headers for the context.
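The W3C context header in question is `traceparent`. A minimal sketch of parsing it, following the structure defined in the W3C Trace Context specification (`version-traceid-parentid-flags`):

```python
# Parse a W3C "traceparent" header of the form:
#   version "-" trace-id (32 hex) "-" parent-id (16 hex) "-" trace-flags
def parse_traceparent(header: str) -> dict:
    version, trace_id, parent_id, flags = header.split("-")
    assert len(trace_id) == 32 and len(parent_id) == 16
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_id": parent_id,
        "flags": flags,
    }

# Example value taken from the W3C spec's own sample header.
hdr = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
print(parse_traceparent(hdr)["trace_id"])  # 4bf92f3577b34da6a3ce929d0e0e4736
```

Because every sidecar forwards this header unchanged (only the `parent-id` segment rotates per hop), a tracing backend can stitch the full request path together.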
dapr provides a building block to implement the Actor pattern in the form of a Virtual Actor pattern, where an actor is the most fundamental unit of computation. Each line of code you write simulates an action or operation being performed by an actor.
How does dapr work with Kubernetes?
I will write a detailed post on this integration in the future as part of this same series. But for now, take a look at the block diagram which will give you a basic understanding of what that integration would look like.
A deployment on kubernetes includes 4 main pods. If you use the dapr cli to initialize the services, you will notice that these pods come up in the default namespace; this can be customized if you leverage the Helm chart instead.
All posts in the dapr series will be explained and coded using a k3s setup created with the help of k3d. Please follow the steps below to get the k3s cluster up and running.
```
# This is how I got the k3d setup on my Mac. However, please use the
# platform specific instruction to get your setup running.
# https://github.com/rancher/k3d#get
▲ ~ brew install k3d

# Create a 3 worker and one leader node. This will create 4 containers
# and get your cluster up and running.
# Let us disable `traefik` for now. Otherwise, it will come in the way of
# some easy testing of the `dapr` workflow in this series.
△ ~ k3d create cluster k8s --workers 3 --server-arg '--no-deploy=traefik'

▲ ~ docker ps
CONTAINER ID   IMAGE                COMMAND                  CREATED          STATUS          PORTS                    NAMES
b4d5cefccfaf   rancher/k3s:latest   "/bin/k3s agent"         20 minutes ago   Up 20 minutes                            k3d-k3s-default-worker-2
ddb2325dc8fa   rancher/k3s:latest   "/bin/k3s agent"         20 minutes ago   Up 20 minutes                            k3d-k3s-default-worker-1
9f873052075a   rancher/k3s:latest   "/bin/k3s agent"         20 minutes ago   Up 20 minutes                            k3d-k3s-default-worker-0
1001f1c1c305   rancher/k3s:latest   "/bin/k3s server --h…"   20 minutes ago   Up 20 minutes   0.0.0.0:6443->6443/tcp   k3d-k3s-default-server
```
However, you are free to use any other dev tooling, such as kind, to get your cluster up and running; a single node k8s setup would work all the same. This entire post is written with the following setup. If you are following along with me, you might have to replace a few platform-specific parts, such as download URLs, to match your operating system. Whenever there is a need for it, I will add a note in the code block section.
Installing the CLI
This CLI is your way of setting up an easy development infrastructure for dapr on your machine.
```
# NOTE: Please download the tar.gz specific to your platform.
# Download the cli binary and extract it into your $PATH
wget https://github.com/dapr/cli/releases/download/v0.8.0/dapr_darwin_amd64.tar.gz
tar -zxvf dapr_darwin_amd64.tar.gz -C /tmp
cp /tmp/dapr /usr/local/bin
```
Check Your Kubernetes Cluster State
```
▲ ~ kubectl get nodes -o wide
NAME                       STATUS   ROLES    AGE     VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION     CONTAINER-RUNTIME
k3d-k3s-default-worker-2   Ready    <none>   7m31s   v1.18.4+k3s1   172.21.0.5    <none>        Unknown    4.19.76-linuxkit   containerd://1.3.3-k3s2
k3d-k3s-default-worker-1   Ready    <none>   7m29s   v1.18.4+k3s1   172.21.0.4    <none>        Unknown    4.19.76-linuxkit   containerd://1.3.3-k3s2
k3d-k3s-default-server     Ready    master   7m29s   v1.18.4+k3s1   172.21.0.2    <none>        Unknown    4.19.76-linuxkit   containerd://1.3.3-k3s2
k3d-k3s-default-worker-0   Ready    <none>   7m29s   v1.18.4+k3s1   172.21.0.3    <none>        Unknown    4.19.76-linuxkit   containerd://1.3.3-k3s2
```
Make sure not to forget the --kubernetes argument while running dapr init, or else dapr won't be set up on the cluster.
```
▲ ~ dapr init --kubernetes
⌛  Making the jump to hyperspace...
ℹ️  Note: this installation is recommended for testing purposes. For production environments, please use Helm
✅  Deploying the Dapr control plane to your cluster...
✅  Success! Dapr has been installed. To verify, run 'kubectl get pods -w' or 'dapr status -k' in your terminal.
    To get started, go here: https://aka.ms/dapr-getting-started
```
Once the above command is run, your cluster is up and ready with the most basic components of dapr configured. Let us take a look at the cluster state, see which components dapr initialized, and what purpose they serve.
Pods and Services
```
▲ ~ kubectl get pods -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP          NODE                       NOMINATED NODE   READINESS GATES
redis-leader-7d557b94bb-rdtqx           1/1     Running   0          6m28s   10.42.1.3   k3d-k3s-default-worker-0   <none>           <none>
dapr-sentry-58c576ff98-vl8n8            1/1     Running   0          6m47s   10.42.2.3   k3d-k3s-default-worker-1   <none>           <none>
dapr-operator-75b4f7986b-r25d4          1/1     Running   0          6m47s   10.42.3.3   k3d-k3s-default-worker-2   <none>           <none>
dapr-sidecar-injector-c898fb49b-v4pnf   1/1     Running   0          6m46s   10.42.3.2   k3d-k3s-default-worker-2   <none>           <none>
dapr-placement-84f9cd87b7-ckjrl         1/1     Running   1          6m47s   10.42.0.3   k3d-k3s-default-server     <none>           <none>

▲ ~ kubectl get svc -o wide
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE     SELECTOR
kubernetes              ClusterIP   10.43.0.1       <none>        443/TCP   8m14s   <none>
dapr-api                ClusterIP   10.43.39.14     <none>        80/TCP    6m57s   app=dapr-operator
dapr-placement          ClusterIP   10.43.245.162   <none>        80/TCP    6m57s   app=dapr-placement
dapr-sentry             ClusterIP   10.43.84.125    <none>        80/TCP    6m57s   app=dapr-sentry
dapr-sidecar-injector   ClusterIP   10.43.252.177   <none>        443/TCP   6m57s   app=dapr-sidecar-injector
```
dapr-sidecar-injector
This is a mutating webhook that injects the sidecar containers into your deployment based on specific annotations put in the deployment spec.
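As a sketch, those annotations might look like this in a deployment spec. The annotation names below (`dapr.io/enabled`, `dapr.io/id`, `dapr.io/port`) follow the convention of the dapr v0.x releases used in this post and may differ in later versions; the app name and port are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service        # hypothetical app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
      annotations:
        dapr.io/enabled: "true"        # tells the injector to add the sidecar
        dapr.io/id: "orders-service"   # App ID used for service discovery
        dapr.io/port: "8080"           # port your app listens on
    spec:
      containers:
        - name: orders-service
          image: orders-service:latest
```

With `dapr.io/enabled` set, the mutating webhook rewrites the pod spec at admission time to add the daprd sidecar container alongside your app.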
dapr-operator
This is a simple kubernetes operator that monitors dapr components being provisioned or updated in the cluster and provides notifications of those changes.
dapr-sentry
This serves as a certificate authority that helps with mTLS. mTLS is the mechanism dapr uses to authenticate all service to service calls.
dapr-placement
This pod manages actor distribution and key range management. If you are not using the dapr actor building block, this service is not required.
Service Accounts

```
▲ ~ kubectl get sa dapr-operator
NAME            SECRETS   AGE
dapr-operator   1         13m
```

This service account is used by the dapr control components to perform their operational work. Avoid using this sa for any of your application-specific workflows.
```
△ ~ kubectl get clusterrolebinding | grep dapr
dapr-operator        ClusterRole/cluster-admin   14m
dapr-secret-reader   ClusterRole/secret-reader   14m
```
dapr-secret-reader is the binding between the secret-reader ClusterRole and the default service account, allowing secrets to be read from the cluster.
```
▲ ~ kubectl get crd --all-namespaces | grep dapr
components.dapr.io       2020-07-07T15:20:09Z
configurations.dapr.io   2020-07-07T15:20:09Z
```
components is a CRD provisioned by dapr; it is a high level abstraction representing everything that is part of a dapr building block.
configurations defines any configuration of a dapr component, such as a tracing level config. We will get into the details of this sooner than you think.
Setup a Redis State Management Store
For the purpose of this demo, we are not going to go crazy with helm charts or anything. Let us create a simple yaml based deployment that doesn't really need auth, so that we can easily access it and test it using the CLI.
```yaml
# redis-deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
        - name: master
          image: redis
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-leader
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
```
```
> kubectl apply -f redis-deployment.yaml
deployment.apps/redis-leader created
service/redis-leader created
```
Let us run a kubectl port-forward command and quickly verify that the redis instance is up and running.
```
▲ ~ kubectl port-forward deployment/redis-leader 6379 &
[1] 27161
Forwarding from 127.0.0.1:6379 -> 6379
Forwarding from [::1]:6379 -> 6379

▲ ~ redis-cli
Handling connection for 6379
127.0.0.1:6379> set test test
OK
127.0.0.1:6379> get test
"test"
127.0.0.1:6379>
```
dapr State Management CR
Once the above test is successfully completed, we can go ahead and let dapr know that it can use redis-leader as the state management store. This is done by creating a CustomResource of the Component kind in the default namespace.
```yaml
# redis-cr.yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.redis
  metadata:
    - name: redisHost
      value: redis-leader.default
```
```
△ ~ kubectl apply -f redis-cr.yaml
component.dapr.io/statestore created

▲ ~ kubectl get component
NAME         AGE
statestore   10s
```
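To preview where this is heading: once a sidecar is running, an app persists state by calling the sidecar's state API rather than redis directly. Below is a minimal sketch of the request an app would construct, assuming dapr's default HTTP port 3500 and the `statestore` component name defined above; it only builds the URL and body, it does not call a live sidecar:

```python
import json

# dapr's default sidecar HTTP port; the store name matches the Component
# CR created above. Both are assumptions about your local setup.
DAPR_PORT = 3500
STORE_NAME = "statestore"

def save_state_request(pairs: dict) -> tuple:
    """Build (url, body) for dapr's state API: POST /v1.0/state/<store>."""
    url = f"http://localhost:{DAPR_PORT}/v1.0/state/{STORE_NAME}"
    body = json.dumps([{"key": k, "value": v} for k, v in pairs.items()])
    return url, body

url, body = save_state_request({"order": "42"})
print(url)   # http://localhost:3500/v1.0/state/statestore
print(body)  # [{"key": "order", "value": "42"}]
```

Because the sidecar owns the connection to redis, the app never needs a redis client or credentials; swapping the backing store later only requires changing the Component CR.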
In the next post of this series, we will talk about how to go about writing your first app using
state management and perform some basic service to service interaction
and get the tracing and observability up and running.