For a production-ready Kubernetes cluster deployment, it is recommended that you run a cluster of at least three worker nodes to support a highly available control plane installation. The following resource settings can serve as a starting point. Requirements vary depending on cluster size and other factors, so test in your own environment to find the right values:
Note: For more info on CPU and Memory resource units and their meaning, see this link
Deployment | CPU | Memory |
---|---|---|
Operator | Limit: 1, Request: 100m | Limit: 200Mi, Request: 100Mi |
Sidecar Injector | Limit: 1, Request: 100m | Limit: 200Mi, Request: 30Mi |
Sentry | Limit: 1, Request: 100m | Limit: 200Mi, Request: 30Mi |
Placement | Limit: 1, Request: 250m | Limit: 150Mi, Request: 75Mi |
Dashboard | Limit: 200m, Request: 50m | Limit: 200Mi, Request: 20Mi |
When installing Dapr using Helm, no default limit/request values are set. Each component has a `resources` option (for example, `dapr_dashboard.resources`), which you can use to tune the Dapr control plane to fit your environment. The Helm chart readme has detailed information and examples. For local/dev installations, you might simply want to skip configuring the `resources` options.
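For example, a values file entry that applies the Dashboard figures from the table above might look like the following sketch; the exact key layout is defined by the Helm chart, so confirm it against the chart readme:
# values file snippet (key layout per the Dapr Helm chart readme)
dapr_dashboard:
  resources:
    requests:
      cpu: 50m
      memory: 20Mi
    limits:
      cpu: 200m
      memory: 200Mi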
The following Dapr control plane deployments are optional:
To set the resource assignments for the Dapr sidecar, see the annotations here. The specific annotations related to resource constraints are:
- `dapr.io/sidecar-cpu-limit`
- `dapr.io/sidecar-memory-limit`
- `dapr.io/sidecar-cpu-request`
- `dapr.io/sidecar-memory-request`
If not set, the Dapr sidecar will run without resource settings, which may lead to issues. For a production-ready setup, it is strongly recommended to configure these settings.
For more details on configuring resources in Kubernetes, see Assign Memory Resources to Containers and Pods and Assign CPU Resources to Containers and Pods.
Example settings for the Dapr sidecar in a production-ready setup:
CPU | Memory |
---|---|
Limit: 300m, Request: 100m | Limit: 1000Mi, Request: 250Mi |
Note: Since Dapr is intended to do much of the I/O heavy lifting for your app, it’s expected that the resources given to Dapr enable you to drastically reduce the resource allocations for the application.
The CPU and memory limits above account for the fact that Dapr is intended to perform a high number of I/O-bound operations. It is strongly recommended that you use a monitoring tool to baseline the sidecar (and app) containers and tune these settings based on those baselines.
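As a sketch, applying the example sidecar values above to a Deployment's pod template could look like the following; `dapr.io/enabled` and `dapr.io/app-id` are the standard sidecar injection annotations, and the app ID shown is a placeholder:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "nodeapp"              # placeholder app ID
        dapr.io/sidecar-cpu-limit: "300m"
        dapr.io/sidecar-memory-limit: "1000Mi"
        dapr.io/sidecar-cpu-request: "100m"
        dapr.io/sidecar-memory-request: "250Mi"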
When deploying Dapr in a production-ready configuration, it’s recommended to deploy with a highly available (HA) configuration of the control plane, which creates 3 replicas of each control plane pod in the dapr-system namespace. This configuration allows for the Dapr control plane to survive node failures and other outages.
HA mode can be enabled with both the Dapr CLI and with Helm charts.
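With the Dapr CLI, HA mode is typically enabled at install time; a minimal sketch, assuming the `--enable-ha` flag available in recent CLI versions:
# install the Dapr control plane in HA mode via the CLI
dapr init --kubernetes --enable-ha=true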
For a full guide on deploying Dapr with Helm, visit this guide.
It is recommended to create a values file instead of specifying parameters on the command-line. This file should be checked in to source control so that you can track changes made to it.
For a full list of all available options you can set in the values file (or by using the `--set` command-line option), see https://github.com/dapr/dapr/blob/master/charts/dapr/README.md.
Instead of using either `helm install` or `helm upgrade` as shown below, you can also run `helm upgrade --install` - this will dynamically determine whether to install or upgrade.
# add/update the helm repo
helm repo add dapr https://dapr.github.io/helm-charts/
helm repo update
# See which chart versions are available
helm search repo dapr --devel --versions
# create a values file to store variables
touch values.yml
cat << EOF >> values.yml
global:
  ha:
    enabled: true
EOF
# run install/upgrade
helm install dapr dapr/dapr \
--version=<Dapr chart version> \
--namespace dapr-system \
--create-namespace \
--values values.yml \
--wait
# verify the installation
kubectl get pods --namespace dapr-system
This command will run 3 replicas of each control plane service in the dapr-system namespace.
Note: The Dapr Helm chart automatically deploys with affinity for nodes with the label `kubernetes.io/os=linux`. You can deploy the Dapr control plane to Windows nodes, but most users should not need to. For more information see Deploying to a Hybrid Linux/Windows K8s Cluster.
Dapr supports zero downtime upgrades. The upgrade path includes the following steps:
To upgrade the Dapr CLI, download the latest version of the CLI and ensure it’s in your path.
See steps to upgrade Dapr on a Kubernetes cluster.
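With Helm, the control plane upgrade itself is typically a `helm upgrade` of the existing release; a sketch, with the chart version left as a placeholder:
# upgrade the existing dapr release in place
helm repo update
helm upgrade dapr dapr/dapr \
  --version=<Dapr chart version> \
  --namespace dapr-system \
  --wait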
The last step is to update pods that are running Dapr to pick up the new version of the Dapr runtime.
To do that, simply issue a rollout restart command for any deployment that has the `dapr.io/enabled` annotation:
kubectl rollout restart deploy/<Application deployment name>
To see a list of all your Dapr enabled deployments, you can either use the Dapr Dashboard or run the following command using the Dapr CLI:
dapr list -k
APP ID APP PORT AGE CREATED
nodeapp 3000 16h 2020-07-29 17:16.22
When properly configured, Dapr ensures secure communication. It can also make your application more secure with a number of built-in features.
It is recommended that a production-ready deployment includes the following settings:
Mutual Authentication (mTLS) should be enabled. Note that Dapr has mTLS on by default. For details on how to bring your own certificates, see here
App to Dapr API authentication is enabled. This is the communication between your application and the Dapr sidecar. To secure the Dapr API from unauthorized application access, it is recommended to enable Dapr’s token based auth. See enable API token authentication in Dapr for details
Dapr to App API authentication is enabled. This is the communication between Dapr and your application. This ensures that Dapr knows that it is communicating with an authorized application. See Authenticate requests from Dapr using token authentication for details
All component YAMLs should have secret data configured in a secret store and not hard-coded in the YAML file. See here on how to use secrets with Dapr components; a sketch of this pattern follows this list.
The Dapr control plane is installed in a dedicated namespace such as `dapr-system`.
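As a sketch of the secret-reference pattern mentioned in the list above, a component can pull sensitive values from a secret store instead of embedding them; the Redis component and secret names below are placeholders:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: redis-master:6379
  - name: redisPassword
    secretKeyRef:           # reference a secret instead of hard-coding the value
      name: redis-secret    # placeholder Kubernetes secret name
      key: redis-password   # placeholder key within that secret
auth:
  secretStore: kubernetes   # built-in Kubernetes secret store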
Dapr also supports scoping components for certain applications. This is not a required practice, and can be enabled according to your security needs. See here for more info.
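Scoping is done by adding a `scopes` list to the component definition; for example, extending the sketch above so that only the listed app IDs (placeholders here) can load the component:
scopes:
- nodeapp
- pythonapp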
Dapr has tracing and metrics enabled by default. It is recommended that you set up distributed tracing and metrics for your applications and the Dapr control plane in production.
If you already have your own observability set-up, you can disable tracing and metrics for Dapr.
To configure a tracing backend for Dapr, visit this link.
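As a sketch, tracing is configured through a Dapr Configuration resource; the Zipkin endpoint below is a placeholder for your own backend:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    samplingRate: "1"      # sample every request; lower this for high-traffic workloads
    zipkin:
      endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"  # placeholder backend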
For metrics, Dapr exposes a Prometheus endpoint listening on port 9090 which can be scraped by Prometheus.
To set up Prometheus, Grafana, and other monitoring tools with Dapr, visit this link.
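If you run your own Prometheus, a minimal pod-discovery scrape job along these lines can pick up the Dapr metrics endpoints; this sketch assumes the pods carry the common `prometheus.io/scrape` and `prometheus.io/port` annotations, so adjust it to your own discovery setup:
scrape_configs:
- job_name: dapr-sidecars
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # keep only pods annotated for scraping
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"
  # scrape the port declared in the prometheus.io/port annotation
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: '([^:]+)(?::\d+)?;(\d+)'
    replacement: '$1:$2'
    target_label: __address__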