This chapter is about deployment manifests and common resources you likely want to include.
ClusterRole (with an associated binding) is necessary for your controller to function in-cluster. Below we list the common rules you need for the basics:
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-rs-controller
rules:
  # You want access to your CRD if you have one
  # Replace documents with your plural resource name, and kube.rs with your group
  - apiGroups: ["kube.rs"]
    resources: ["documents", "documents/status", "documents/finalizers"]
    verbs: ["get", "list", "watch", "patch", "update"]
  # If you want events
  - apiGroups: ["events.k8s.io"]
    resources: ["events"]
    verbs: ["create"]
```
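The associated binding is not shown above. A minimal sketch of the ServiceAccount plus ClusterRoleBinding wiring (the names and the `controllers` namespace are illustrative; match them to your deployment):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-rs-controller
  namespace: controllers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-rs-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-rs-controller
subjects:
  - kind: ServiceAccount
    name: kube-rs-controller
    namespace: controllers
```

Remember to set `serviceAccountName: kube-rs-controller` in the controller's pod spec so the deployment actually runs with these permissions.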
See security#Access Constriction to ensure the setup is as strict as is needed.
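As one form of access constriction: if your controller only watches custom resources in a single namespace, the same rules can be scoped down to a namespaced Role (bound via a RoleBinding) instead of a cluster-wide grant. A sketch, reusing the illustrative names from above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kube-rs-controller
  namespace: controllers
rules:
  # Same CRD access as before, but only within this namespace
  - apiGroups: ["kube.rs"]
    resources: ["documents", "documents/status", "documents/finalizers"]
    verbs: ["get", "list", "watch", "patch", "update"]
```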
Note that Kubernetes has two Event structs; the events rule above grants access to the newer `events.k8s.io` API group rather than the legacy core/v1 `Event`.
We do not provide any hooks to generate RBAC from Rust source (it's not super helpful), so it is expected you put the various rules you need straight in your chart templates / jsonnet etc.
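As an example of putting the rules straight into chart templates: with helm you can also make optional grants toggleable. A sketch of what a chart's `templates/rbac.yaml` might look like, where `controller.fullname` and the `events.enabled` value are hypothetical names for this illustration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ include "controller.fullname" . }}
rules:
  - apiGroups: ["kube.rs"]
    resources: ["documents", "documents/status", "documents/finalizers"]
    verbs: ["get", "list", "watch", "patch", "update"]
{{- if .Values.events.enabled }}
  # Only grant event creation when event publishing is enabled
  - apiGroups: ["events.k8s.io"]
    resources: ["events"]
    verbs: ["create"]
{{- end }}
```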
See controller-rs/rbac for an example of how to hook this up.
Below is a starter netpol that allows DNS, talking to the Kubernetes apiserver, and basic observability such as pushing otel spans and having metrics scraped by prometheus:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kube-rs-controller
  labels:
    app: kube-rs-controller
  namespace: controllers
spec:
  podSelector:
    matchLabels:
      app: kube-rs-controller
  policyTypes:
    - Ingress
    - Egress
  egress:
    # Pushing tracing spans to an opentelemetry collector
    - to:
        - namespaceSelector:
            matchLabels:
              name: opentelemetry-operator-system
      ports:
        # jaeger thrift
        - port: 14268
          protocol: TCP
        # OTLP gRPC
        - port: 4317
          protocol: TCP
        # OTLP HTTP
        - port: 4318
          protocol: TCP
        # zipkin
        - port: 9411
          protocol: TCP
    # Kubernetes apiserver
    - to:
        - ipBlock:
            # range should be replaced by kubernetes endpoint addresses from:
            # kubectl get endpoints kubernetes -oyaml
            cidr: 10.20.0.2/32
      ports:
        - port: 443
          protocol: TCP
        - port: 6443
          protocol: TCP
    # DNS
    - to:
        - podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 53
          protocol: UDP
  ingress:
    # prometheus metric scraping support
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
          podSelector:
            matchLabels:
              app: prometheus
      ports:
        - port: http
          protocol: TCP
```
Adjust the app labels, names, namespaces, and ingress port names to your own values. Consider using the Network Policy Editor for interactive sanity-checking.
See controller-rs/networkpolicy for an example of how to hook this up.
Some notes on the above:

- apiserver egress is complicated. A broad default sometimes works, but the safest option is to get the actual `kubernetes` endpoints. See the controller-rs/netpol pr. Cilium's counterpart of `toEntities: [kube-apiserver]` looks friendlier.
- DNS egress should work for both `kube-dns` and CoreDNS setups, since CoreDNS deployments typically keep the `k8s-app: kube-dns` label
- the `prometheus` port and app labels might depend on your deployment setup; drop lines from the strict default, or tune values as you see fit
- the `opentelemetry-collector` values are the regular defaults from the collector helm chart; change as you see fit
- the policy editor needs a non-aliased integer port; while valid, it will reject named ports such as the `port: http` used in the ingress rule above
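On the Cilium point: its `toEntities` selector avoids hardcoding apiserver endpoint CIDRs entirely. A sketch of what the apiserver egress rule could look like as a CiliumNetworkPolicy fragment (assuming Cilium is your CNI; names mirror the policy above):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: kube-rs-controller
  namespace: controllers
spec:
  endpointSelector:
    matchLabels:
      app: kube-rs-controller
  egress:
    # Cilium tracks the apiserver identity for you, so no ipBlock is needed
    - toEntities:
        - kube-apiserver
```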