Kind: k8s
Netdata can automatically discover monitorable workloads inside a Kubernetes cluster — pods (with their containers and ports) or Services. The discoverer watches the Kubernetes API in real time, exposes per-pod-container or per-service-port targets to the rule engine, and lets you generate collector jobs from labels, annotations, container images, and ports.
This page covers Kubernetes-specific setup. For the broader Service Discovery model and the shared template-helper reference, see Service Discovery.
Each Kubernetes discovery pipeline runs as either a pod discoverer or a service discoverer (selected by the role option). It then:
- Connects to the Kubernetes API server (there is no api_server config — the discoverer uses the standard k8s client config-loader chain).
- Watches the configured namespaces[], optionally narrowed by label/field selectors.
- With role: pod it emits one target per (pod, container, container-port) triple. Container env, image, labels, annotations, and node name are all exposed.
- With role: service it emits one target per (service, service-port) pair, with the cluster-internal DNS name (name.ns.svc:port) as .Address.
- Evaluates the services: rules against each target, producing collector jobs.

Note that /etc/netdata/go.d/sd/k8s.conf is not packaged with the agent. On Kubernetes deployments you should install Netdata via the Helm chart — the chart renders both the discoverer config and a curated rule set tailored to your cluster's Netdata setup.

role is a single-valued option. If you want both pod and service discovery, configure two pipelines.

local_mode for pods is opt-in: by default the pod discoverer watches all pods in the configured namespaces. Set pod.local_mode: true to restrict discovery to pods on the same node as the Netdata Agent (intended for the parent-on-every-node Helm topology). When local_mode is enabled, the env var MY_NODE_NAME must be set on the Netdata pod (the Helm chart sets this via the downward API).

You can configure the k8s discoverer in two ways:
| Method | Best for | How to |
|---|---|---|
| UI | Fast setup without editing files | Go to Collectors -> go.d -> ServiceDiscovery -> k8s, then add a discovery pipeline. |
| File | File-based configuration or automation | Edit /etc/netdata/go.d/sd/k8s.conf and define the discoverer: and services: blocks. |
The supported way to run the k8s discoverer is via the Netdata Helm chart. The chart provisions the right RBAC (get/list/watch on pods, services, configmaps, secrets), wires MY_NODE_NAME for local_mode, and ships a stock services: rule set tuned to its parent/child topology.
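For reference, installing the chart is a standard Helm workflow; the release name and namespace below are examples, not requirements:

```bash
# Add the official Netdata Helm repository and install the chart
helm repo add netdata https://netdata.github.io/helmchart/
helm repo update
helm install netdata netdata/netdata --namespace netdata --create-namespace
```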
The discoverer needs the following verbs from its service account:
- pods: get, list, watch (cluster-wide or per-namespace, matching namespaces[])
- services: get, list, watch (only when role: service)
- configmaps, secrets: get, list, watch (only when role: pod — used to enrich pod targets with referenced env values)

The Helm chart's default RBAC role covers all of these; a sketch of an equivalent role for non-Helm deployments follows.
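For deployments outside the Helm chart, a ClusterRole along these lines grants the required verbs. This is a minimal sketch; the role, binding, service-account, and namespace names are illustrative, not names the agent expects:

```yaml
# Sketch of RBAC for the k8s discoverer (names are examples)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: netdata-sd
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps", "secrets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: netdata-sd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: netdata-sd
subjects:
  - kind: ServiceAccount
    name: netdata          # the service account the Netdata pod runs under
    namespace: netdata     # namespace where Netdata is deployed
```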
If you set pod.local_mode: true, you must also set MY_NODE_NAME. When local_mode is enabled, the Netdata Agent reads its node name from MY_NODE_NAME. The Helm chart sets this via the downward API:
```yaml
env:
  - name: MY_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```
The configuration file has two top-level blocks: discoverer: (the options below) and services: (rules that turn discovered pods/services into collector jobs — see Service Rules).
After editing the file, restart the Netdata Agent to load the updated discovery pipeline. The default and recommended deployment path on Kubernetes is the Netdata Helm chart — the chart renders this file and the rules for you.
| Option | Description | Default | Required |
|---|---|---|---|
| role | What to discover. One of pod or service. | | yes |
| namespaces | Namespaces to watch. Empty means all namespaces. | [] (all namespaces) | no |
| selector.label | Label selector applied at watch time (server-side filtering). | | no |
| selector.field | Field selector applied at watch time. | | no |
| pod.local_mode | Restrict pod discovery to pods on the same node as the Netdata Agent. | false | no |
- pod — produces one target per (pod, container, port) triple. Use this for the bulk of in-cluster monitoring (databases, exporters, applications).
- service — produces one target per (service, port) pair. Use this for cluster-internal endpoints monitored at the service-name DNS level.

To watch both, configure two pipelines, as sketched below.
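For example, the pod and service pipelines could live in two separate files under /etc/netdata/go.d/sd/. This is a sketch: the file names are arbitrary, and whether additional files in that directory are loaded automatically depends on your go.d setup.

```yaml
# Sketch: /etc/netdata/go.d/sd/k8s-pods.conf (example file name)
disabled: no
discoverer:
  k8s:
    role: pod
services: [ ]
```

```yaml
# Sketch: /etc/netdata/go.d/sd/k8s-services.conf (example file name)
disabled: no
discoverer:
  k8s:
    role: service
services: [ ]
```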
selector.label accepts standard Kubernetes label-selector syntax: app=foo, environment in (prod, staging), etc. It reduces watch traffic when only a subset of pods/services is interesting.
selector.field accepts standard field selectors, for example status.phase=Running or spec.nodeName=node-1. When pod.local_mode: true, the discoverer automatically appends spec.nodeName=$MY_NODE_NAME. You can preview what a selector matches with kubectl, as shown below.
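To preview what a label or field selector would match before putting it in the pipeline, the same expressions can be fed to kubectl (the selector values are examples):

```bash
# Preview label-selector matches across all namespaces
kubectl get pods -A -l 'environment in (prod, staging)'

# Preview field-selector matches
kubectl get pods -A --field-selector status.phase=Running
```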
pod.local_mode only applies when role: pod. It requires MY_NODE_NAME to be set on the Netdata container. The Helm chart's parent-on-every-node topology uses it to keep watch traffic local.
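A sketch of a pod-role pipeline combining the options above; the namespace and selector values are placeholders, and the empty services: block is where the rules go (see Service Rules):

```yaml
disabled: no
discoverer:
  k8s:
    role: pod
    namespaces:
      - default              # example; an empty list means all namespaces
    selector:
      label: app=foo         # example label selector
      field: status.phase=Running
    pod:
      local_mode: true       # requires MY_NODE_NAME on the Netdata pod
services: [ ]
```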
In the UI, go to Collectors -> go.d -> ServiceDiscovery -> k8s and add a discovery pipeline. For file-based configuration, define the discovery pipeline in /etc/netdata/go.d/sd/k8s.conf.
The file has two top-level blocks: discoverer: (the options above) and services: (rules that turn discovered targets into collector jobs — see Service Rules).
After editing the file, restart the Netdata Agent to load the updated discovery pipeline.
This is the configuration the Helm chart renders by default for the parent-on-every-node topology:
```yaml
disabled: no
discoverer:
  k8s:
    role: pod
    pod:
      local_mode: true
services: [ ]
```
Watch only Services in the monitoring namespace, scoped by a label selector.
```yaml
disabled: no
discoverer:
  k8s:
    role: service
    namespaces:
      - monitoring
    selector:
      label: app.kubernetes.io/component=metrics-endpoint
services: [ ]
```
The service account needs get, list, watch on pods (or services), and on configmaps + secrets for pod-role env enrichment. The Helm chart provisions this; out-of-Helm deployments must bind the equivalent role.
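You can verify what the discoverer's service account is allowed to do with kubectl auth can-i (replace the namespace and service-account names with your own):

```bash
# Check the required verbs as the Netdata service account (example names)
kubectl auth can-i list pods --as=system:serviceaccount:netdata:netdata
kubectl auth can-i watch services --as=system:serviceaccount:netdata:netdata
kubectl auth can-i get secrets --as=system:serviceaccount:netdata:netdata
```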
If pod.local_mode: true is set but MY_NODE_NAME is missing, the discoverer fails at startup with local_mode is enabled, but env 'MY_NODE_NAME' not set. Set the env via the downward API on the Netdata pod (the Helm chart does this).
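To confirm the variable is present on a running Netdata pod (pod name and namespace are placeholders, and the check assumes printenv exists in the image):

```bash
# Print MY_NODE_NAME inside the Netdata container
kubectl -n <namespace> exec <netdata-pod> -- printenv MY_NODE_NAME
```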
- Check that the workloads run in one of the configured namespaces[].
- If selector.label or selector.field is set, verify the targets actually carry the matching labels/fields.
- With local_mode, only pods on the same node as the Netdata pod are visible.
- The Address resolves to the pod's CNI IP — the Netdata Agent must be able to reach pod IPs. Most CNIs allow this from a pod running in the same cluster, but flat-network requirements differ. For service-role targets, the cluster-internal DNS name (<svc>.<ns>.svc) is used and should always resolve from inside the cluster.
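If reachability is in doubt, a throwaway busybox pod can test a pod IP and the service DNS name from inside the cluster (addresses and names are placeholders):

```bash
# Check that a pod-role target's IP and port answer from inside the cluster
kubectl run net-debug --rm -it --restart=Never --image=busybox -- \
  wget -qO- -T 3 http://<pod-ip>:<port>

# Check that a service-role target's DNS name resolves
kubectl run dns-debug --rm -it --restart=Never --image=busybox -- \
  nslookup <svc>.<ns>.svc
```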