Kind: http
Netdata can pull a list of monitorable targets from any HTTP endpoint you control — a CMDB API, an internal asset registry, a static file served by nginx, or a Prometheus-style file_sd export. The discoverer fetches the endpoint, decodes JSON or YAML, and feeds each item into the services: rule engine. This is the “bring your own source-of-truth” discoverer.
This page covers HTTP-specific setup. For the broader Service Discovery model and the shared template-helper reference, see Service Discovery.
Each discovery cycle, the discoverer:

1. Fetches url over HTTP/HTTPS, honouring all standard go.d collector HTTP options (auth, headers, TLS, proxy, timeout).
2. Decodes the response according to format (auto / json / yaml). With format: auto, the decoder uses Content-Type if it is unambiguous, otherwise tries JSON first then YAML.
3. Expects either a bare array ([ item, item, … ]) or an envelope ({ "items": [ … ] }). Anything else is rejected.
4. Wraps each decoded element into a target exposing .Item (the decoded element — could be a string, a map, a number, …), .TUID, and .Hash.
5. Runs the services: rules against each target. The default stock rule passes the item through unchanged via the toYaml helper, so an endpoint that already serves go.d job configurations works with zero rule authoring.

Two behaviours worth noting:

- One-shot mode (interval: 0) fetches a single time when the pipeline starts. It does not refetch on SD reload — recreate the pipeline to refresh.
- A bearer_token_file under /var/run/secrets/ is treated as optional when Netdata is not running in Kubernetes (so the same config can be used in a Helm deployment without erroring out on dev hosts).

You can configure the http discoverer in two ways:
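The array-or-envelope contract in step 3 can be sketched in a few lines of Python. This is an illustration of the documented behaviour, not the discoverer's actual code:

```python
def extract_items(decoded):
    """Accept a bare array or an {"items": [...]} envelope; reject anything else."""
    if isinstance(decoded, list):
        return decoded
    if isinstance(decoded, dict) and isinstance(decoded.get("items"), list):
        return decoded["items"]
    raise ValueError("expected a top-level array or an {'items': [...]} envelope")
```

Any other top-level shape (a string, a number, a map without an items list) is treated as an error for the whole fetch.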
| Method | Best for | How to |
|---|---|---|
| UI | Fast setup without editing files | Go to Collectors -> go.d -> ServiceDiscovery -> http, then add a discovery pipeline. |
| File | File-based configuration or automation | Edit /etc/netdata/go.d/sd/http.conf and define the discoverer: and services: blocks. |
Stand up an HTTP endpoint that returns either a top-level array ([ "https://a/health", "https://b/health" ]) or an envelope ({ "items": [...] }). Items can be primitives (strings, numbers), maps, or any nestable value the rule engine knows how to consume.
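For a quick local experiment, Python's standard library is enough to stand up such an endpoint. The path and item values below are made up for illustration; any URL layout works as long as the body has one of the two accepted shapes:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical inventory; in practice this would come from your CMDB or registry.
ITEMS = ["https://a/health", "https://b/health"]

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the {"items": [...]} envelope form with an unambiguous Content-Type.
        body = json.dumps({"items": ITEMS}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch it back, the way the discoverer would.
data = json.load(urlopen(f"http://127.0.0.1:{server.server_port}/netdata/items"))
print(data["items"])  # prints ['https://a/health', 'https://b/health']
server.shutdown()
```

Point the discoverer's url option at the equivalent endpoint in your infrastructure.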
Two approaches are possible:

- Pass-through: the endpoint already serves go.d job configurations, and the stock rule pipes each item through toYaml. Zero rule authoring on the Netdata side.
- Curated: the endpoint serves raw inventory data, and you write services: rules that map the data to the right collector module. More work, more flexibility.

The configuration file has two top-level blocks: discoverer: (the options below) and services: (rules that turn fetched items into collector jobs — see Service Rules).
After editing the file, restart the Netdata Agent to load the updated discovery pipeline.
| Option | Description | Default | Required |
|---|---|---|---|
| url | HTTP/HTTPS endpoint that returns the items. | | yes |
| interval | How often to refetch the endpoint. | 1m | no |
| format | Response format. One of auto, json, yaml. | auto | no |
| timeout | Per-request HTTP timeout. | 2s | no |
| headers / username / password / bearer_token_file / proxy_url / tls_skip_verify / etc. | All standard go.d HTTP options are accepted (basic auth, bearer tokens, custom headers, HTTP proxy, TLS options). | | no |
Must be a fully-qualified http:// or https:// URL. The endpoint is expected to return either a bare array or an {"items": [...]} envelope (see Service Rules for the input model).
Set to 0 for one-shot mode — the endpoint is fetched once when the pipeline starts and never again. SD reload does not retrigger; recreate the pipeline to refresh.
With auto, the decoder uses Content-Type when it is unambiguous (application/json, application/yaml, *+json, *+yaml), otherwise tries JSON first then YAML.
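The selection logic reads roughly like this (a Python paraphrase of the rules above, not the actual implementation):

```python
def pick_decoder(content_type):
    """Unambiguous Content-Type wins; otherwise the caller tries JSON, then YAML."""
    ct = content_type.split(";")[0].strip().lower()
    if ct == "application/json" or ct.endswith("+json"):
        return "json"
    if ct == "application/yaml" or ct.endswith("+yaml"):
        return "yaml"
    return None  # ambiguous: fall back to try-JSON-first-then-YAML
```

Setting format: json or format: yaml explicitly skips this detection entirely.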
See any go.d HTTP-based collector (httpcheck, prometheus, nginx, …) for the full set. Notable: when bearer_token_file points under /var/run/secrets/ and Netdata is not running inside Kubernetes, missing token files are silently ignored.
- UI: go to Collectors -> go.d -> ServiceDiscovery -> http and add a discovery pipeline.
- File: define the discovery pipeline in /etc/netdata/go.d/sd/http.conf.
The endpoint serves go.d job configurations directly. Each item must include a module field. The stock rule pipes the item through toYaml unchanged.
```yaml
disabled: no

discoverer:
  http:
    url: https://cmdb.example.com/netdata/jobs.yaml
    interval: 5m
    format: auto

services:
  - id: passthrough
    match: '{{ true }}'
    config_template: |
      {{ .Item | toYaml }}
```
The endpoint returns [ "https://a/health", "https://b/health" ]. Map each URL to an httpcheck job.
```yaml
disabled: no

discoverer:
  http:
    url: https://cmdb.example.com/netdata/health-urls.json
    interval: 1m

services:
  - id: httpcheck
    match: '{{ kindIs "string" .Item }}'
    config_template: |
      name: {{ .TUID }}
      url: {{ .Item }}
```
The endpoint returns [ { "name": "api", "url": "https://api.example.com/health" }, … ].
```yaml
disabled: no

discoverer:
  http:
    url: https://cmdb.example.com/netdata/services.json

services:
  - id: httpcheck
    match: '{{ and (kindIs "map" .Item) (hasKey .Item "url") }}'
    config_template: |
      name: {{ .Item.name }}
      url: {{ .Item.url }}
```
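The match expression above is just a pair of type checks. In Python terms (purely illustrative), it amounts to:

```python
def matches(item):
    """Equivalent of: and (kindIs "map" .Item) (hasKey .Item "url")"""
    return isinstance(item, dict) and "url" in item
```

Items that are plain strings, or maps without a url key, fall through this rule unmatched and produce no job.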
Authenticate against the source-of-truth endpoint using a bearer token from a file.
```yaml
disabled: no

discoverer:
  http:
    url: https://cmdb.example.com/api/v1/netdata/jobs
    bearer_token_file: /etc/netdata/secrets/cmdb-token
    headers:
      Accept: application/yaml

services:
  - id: passthrough
    match: '{{ true }}'
    config_template: |
      {{ .Item | toYaml }}
```
The response is neither valid JSON nor valid YAML. Common causes: the endpoint returned an HTML error page (check status code and Content-Type), the JSON has trailing garbage, or YAML indentation is wrong. Reproduce with curl -i to see the headers + body.
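Since JSON is what the decoder tries first, a quick local pre-check is to run the body through a JSON parser yourself. A stdlib sketch (the error wording below is Python's, not Netdata's):

```python
import json

def check_json(body):
    """Report whether the body parses as JSON, and where it fails if not."""
    try:
        json.loads(body)
        return "valid JSON"
    except json.JSONDecodeError as e:
        return f"not JSON: {e.msg} at offset {e.pos}"
```

An HTML error page typically fails immediately with "Expecting value" at offset 0; a valid document followed by trailing garbage fails with "Extra data" at the offset where the garbage starts.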
Your services: rules are not matching, or they match but the rendered template is empty. With pass-through ({{ .Item | toYaml }}), make sure each upstream item includes module: and name:. With curated rules, double-check the type checks (kindIs, hasKey).
TLS verification fails against the endpoint's certificate. Use tls_skip_verify: yes to bypass it for testing, then mount the issuing CA and set tls_ca: /path/to/ca.crt for production.
When Netdata runs outside Kubernetes and the configured bearer_token_file points under /var/run/secrets/, missing tokens are silently ignored — this is intentional so the same config works in dev and in Helm. If you are inside k8s, the file must exist.
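The decision boils down to two conditions. A Python paraphrase of the documented rule (illustrative only):

```python
def token_file_required(path, running_in_kubernetes):
    """A missing bearer_token_file is tolerated only for
    /var/run/secrets/ paths when Netdata runs outside Kubernetes."""
    return running_in_kubernetes or not path.startswith("/var/run/secrets/")
```

Token files anywhere else (for example under /etc/netdata/secrets/) must always exist, in or out of Kubernetes.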