Monit

Plugin: go.d.plugin Module: monit

Overview

This collector monitors the status of Monit’s service checks.

It sends HTTP requests to the Monit /_status?format=xml&level=full endpoint.
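
You can check this endpoint by hand before setting up a job. A quick probe with curl, assuming Monit’s default admin:monit Basic auth credentials (quote the URL so the shell does not interpret the &):

curl -u admin:monit 'http://127.0.0.1:2812/_status?format=xml&level=full'

A successful response is an XML document describing the Monit server and its service checks.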

This collector is supported on all platforms.

This collector supports collecting metrics from multiple instances of this integration, including remote instances.

Default Behavior

Auto-Detection

By default, it detects Monit instances running on localhost that are listening on port 2812. On startup, it tries to collect metrics from:

  • http://127.0.0.1:2812

Limits

The default configuration for this integration does not impose any limits on data collection.

Performance Impact

The default configuration for this integration is not expected to impose a significant performance impact on the system.

Setup

You can configure the monit collector in two ways:

| Method | Best for | How to |
|--------|----------|--------|
| UI | Fast setup without editing files | Go to Nodes → Configure this node → Collectors → Jobs, search for monit, then click + to add a job. |
| File | If you prefer configuring via file, or need to automate deployments (e.g., with Ansible) | Edit go.d/monit.conf and add a job. |

:::important

UI configuration requires a paid Netdata Cloud plan.

:::

Prerequisites

Enable TCP port

The collector reads status over Monit’s built-in HTTP interface, which must be enabled via a TCP port in monitrc. See Monit’s Syntax for TCP port documentation for details.
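
A minimal sketch of the relevant monitrc directives, assuming the default port and credentials used elsewhere on this page (tighten the allow rules for your environment):

set httpd port 2812
    use address localhost
    allow localhost
    allow admin:monit

After editing monitrc, reload Monit (for example, with monit reload) so the HTTP interface comes up.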

Configuration

Options

The following options can be defined globally: update_every, autodetection_retry.

| Group | Option | Description | Default | Required |
|-------|--------|-------------|---------|----------|
| Collection | update_every | Data collection interval (seconds). | 1 | no |
| Collection | autodetection_retry | Autodetection retry interval (seconds). Set 0 to disable. | 0 | no |
| Target | url | Target endpoint URL. | http://127.0.0.1:2812 | yes |
| Target | timeout | HTTP request timeout (seconds). | 1 | no |
| HTTP Auth | username | Username for Basic HTTP authentication. | admin | no |
| HTTP Auth | password | Password for Basic HTTP authentication. | monit | no |
| HTTP Auth | bearer_token_file | Path to a file containing a bearer token (used for Authorization: Bearer). | | no |
| TLS | tls_skip_verify | Skip TLS certificate and hostname verification (insecure). | no | no |
| TLS | tls_ca | Path to CA bundle used to validate the server certificate. | | no |
| TLS | tls_cert | Path to client TLS certificate (for mTLS). | | no |
| TLS | tls_key | Path to client TLS private key (for mTLS). | | no |
| Proxy | proxy_url | HTTP proxy URL. | | no |
| Proxy | proxy_username | Username for proxy Basic HTTP authentication. | | no |
| Proxy | proxy_password | Password for proxy Basic HTTP authentication. | | no |
| Request | method | HTTP method to use. | GET | no |
| Request | body | Request body (e.g., for POST/PUT). | | no |
| Request | headers | Additional HTTP headers (one per line as key: value). | | no |
| Request | not_follow_redirects | Do not follow HTTP redirects. | no | no |
| Request | force_http2 | Force HTTP/2 (including h2c over TCP). | no | no |
| Virtual Node | vnode | Associates this data collection job with a Virtual Node. | | no |
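
To see how these options combine in a single job, here is a minimal sketch of a job that connects to Monit over HTTPS with a custom CA bundle and a longer timeout (the certificate path is a placeholder, not a shipped default):

jobs:
  - name: local
    url: https://127.0.0.1:2812
    timeout: 2
    tls_ca: /etc/ssl/certs/monit-ca.pem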

via UI

Configure the monit collector from the Netdata web interface:

  1. Go to Nodes.
  2. Select the node where you want the monit data-collection job to run and click the gear icon (Configure this node). That node will run the data collection.
  3. The Collectors → Jobs view opens by default.
  4. In the Search box, type monit (or scroll the list) to locate the monit collector.
  5. Click the + next to the monit collector to add a new job.
  6. Fill in the job fields, then click Test to verify the configuration and Submit to save.
    • Test runs the job with the provided settings and shows whether data can be collected.
    • If it fails, an error message appears with details (for example, connection refused, timeout, or command execution errors), so you can adjust and retest.

via File

The configuration file name for this integration is go.d/monit.conf.

The file format is YAML. Generally, the structure is:

update_every: 1
autodetection_retry: 0
jobs:
  - name: some_name1
  - name: some_name2

You can edit the configuration file using the edit-config script from the Netdata config directory.

cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config go.d/monit.conf

Examples

HTTP authentication

Basic HTTP authentication.

jobs:
  - name: local
    url: http://127.0.0.1:2812
    username: admin
    password: monit

HTTPS with self-signed certificate

Monit with HTTPS enabled and a self-signed certificate.

jobs:
  - name: local
    url: https://127.0.0.1:2812
    tls_skip_verify: yes

Multi-instance

Note: When you define multiple jobs, their names must be unique.

Collecting metrics from local and remote instances.

jobs:
  - name: local
    url: http://127.0.0.1:2812

  - name: remote
    url: http://192.0.2.1:2812
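
If a remote Monit instance is reachable only through an HTTP proxy, the proxy_* options from the table above can be set per job. A minimal sketch, with a placeholder proxy address (add proxy_username and proxy_password if the proxy requires authentication):

jobs:
  - name: behind_proxy
    url: http://192.0.2.1:2812
    proxy_url: http://proxy.example.com:3128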

Metrics

Metrics grouped by scope.

The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.

Per service

These metrics refer to the monitored Service.

Labels:

| Label | Description |
|-------|-------------|
| server_hostname | Hostname of the Monit server. |
| service_check_name | Service check name. |
| service_check_type | Service check type. |

Metrics:

| Metric | Dimensions | Unit |
|--------|------------|------|
| monit.service_check_status | ok, error, initializing, not_monitored | status |

Alerts

There are no alerts configured by default for this integration.
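
No alerts ship with this integration, but you can define your own health template on monit.service_check_status if you want to be notified when a check reports an error. A minimal sketch of a custom entry (for example in health.d/monit.conf; the template name and thresholds are illustrative, not shipped defaults):

 template: monit_service_check_error
       on: monit.service_check_status
   lookup: max -1m unaligned of error
    units: status
    every: 10s
     crit: $this > 0
     info: a Monit service check has been in the error state during the last minute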

Troubleshooting

Debug Mode

Important: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.

To troubleshoot issues with the monit collector, run the go.d.plugin with the debug option enabled. The output should give you clues as to why the collector isn’t working.

  • Navigate to the plugins.d directory, usually at /usr/libexec/netdata/plugins.d/. If that’s not the case on your system, open netdata.conf and look for the plugins setting under [directories].

    cd /usr/libexec/netdata/plugins.d/
    
  • Switch to the netdata user.

    sudo -u netdata -s
    
  • Run the go.d.plugin to debug the collector:

    ./go.d.plugin -d -m monit
    

    To debug a specific job:

    ./go.d.plugin -d -m monit -j jobName
    

Getting Logs

If you’re encountering problems with the monit collector, follow these steps to retrieve logs and identify potential issues:

  • Run the command specific to your system (systemd, non-systemd, or Docker container).
  • Examine the output for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.

System with systemd

Use the following command to view logs generated since the last Netdata service restart:

journalctl _SYSTEMD_INVOCATION_ID="$(systemctl show --value --property=InvocationID netdata)" --namespace=netdata --grep monit

System without systemd

Locate the collector log file, typically at /var/log/netdata/collector.log, and use grep to filter for the collector’s name:

grep monit /var/log/netdata/collector.log

Note: This method shows logs from all restarts. Focus on the latest entries for troubleshooting current issues.

Docker Container

If your Netdata instance runs in a Docker container named netdata (replace with your container name if different), use this command:

docker logs netdata 2>&1 | grep monit
