Plugin: go.d.plugin Module: hdfs
This collector monitors HDFS nodes.
Netdata accesses HDFS metrics over Java Management Extensions
(JMX) through the web interface of an HDFS daemon.
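To see exactly what the collector reads, you can query a daemon’s JMX JSON servlet directly. A minimal check, assuming a NameNode with its web UI on the default port 9870 (the qry parameter narrows the output to a single MBean):
# Fetch the NameNode FSNamesystem MBean as JSON; adjust host and port for your daemon
curl -s 'http://127.0.0.1:9870/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'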
This collector is supported on all platforms.
This collector supports collecting metrics from multiple instances of this integration, including remote instances.
This integration doesn’t support auto-detection.
The default configuration for this integration does not impose any limits on data collection.
The default configuration for this integration is not expected to impose a significant performance impact on the system.
No action required.
The configuration file name for this integration is go.d/hdfs.conf.
You can edit the configuration file using the edit-config script from the Netdata config directory.
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config go.d/hdfs.conf
The following options can be defined globally: update_every, autodetection_retry.
Name | Description | Default | Required |
---|---|---|---|
update_every | Data collection frequency. | 1 | no |
autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |
url | Server URL. | http://127.0.0.1:9870/jmx | yes |
timeout | HTTP request timeout. | 1 | no |
username | Username for basic HTTP authentication. | | no |
password | Password for basic HTTP authentication. | | no |
proxy_url | Proxy URL. | | no |
proxy_username | Username for proxy basic HTTP authentication. | | no |
proxy_password | Password for proxy basic HTTP authentication. | | no |
method | HTTP request method. | GET | no |
body | HTTP request body. | | no |
headers | HTTP request headers. | | no |
not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |
tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |
tls_ca | Certification authority that the client uses when verifying the server’s certificates. | | no |
tls_cert | Client TLS certificate. | | no |
tls_key | Client TLS key. | | no |
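As a sketch of the global options mentioned above, update_every and autodetection_retry can be set once at the top of go.d/hdfs.conf so they apply to every job defined in the file (the values below are illustrative, not recommendations):
# Module-level defaults applied to all jobs in this file
update_every: 5
autodetection_retry: 60

jobs:
  - name: local
    url: http://127.0.0.1:9870/jmx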
A basic example configuration.
jobs:
  - name: local
    url: http://127.0.0.1:9870/jmx
Basic HTTP authentication.
jobs:
  - name: local
    url: http://127.0.0.1:9870/jmx
    username: username
    password: password
Do not validate server certificate chain and hostname.
jobs:
  - name: local
    url: https://127.0.0.1:9870/jmx
    tls_skip_verify: yes
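If the HDFS daemon is reachable only through an HTTP proxy, the proxy_* options from the table above can be combined in a job. A sketch with placeholder proxy address and credentials:
jobs:
  - name: local
    url: http://127.0.0.1:9870/jmx
    proxy_url: http://10.0.0.1:3128
    proxy_username: proxyuser
    proxy_password: proxypass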
Note: When you define multiple jobs, their names must be unique.
Collecting metrics from local and remote instances.
jobs:
  - name: local
    url: http://127.0.0.1:9870/jmx
  - name: remote
    url: http://192.0.2.1:9870/jmx
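Since NameNodes and DataNodes expose different metrics (see the tables below), you would typically define one job per daemon. A sketch assuming the common Hadoop 3 defaults of port 9870 for the NameNode web UI and 9864 for the DataNode web UI:
jobs:
  - name: namenode
    url: http://127.0.0.1:9870/jmx
  - name: datanode
    url: http://127.0.0.1:9864/jmx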
Metrics grouped by scope.
The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
These metrics refer to the entire monitored application.
This scope has no labels.
Metrics:
Metric | Dimensions | Unit | DataNode | NameNode |
---|---|---|---|---|
hdfs.heap_memory | committed, used | MiB | • | • |
hdfs.gc_count_total | gc | events/s | • | • |
hdfs.gc_time_total | ms | ms | • | • |
hdfs.gc_threshold | info, warn | events/s | • | • |
hdfs.threads | new, runnable, blocked, waiting, timed_waiting, terminated | num | • | • |
hdfs.logs_total | info, error, warn, fatal | logs/s | • | • |
hdfs.rpc_bandwidth | received, sent | kilobits/s | • | • |
hdfs.rpc_calls | calls | calls/s | • | • |
hdfs.open_connections | open | connections | • | • |
hdfs.call_queue_length | length | num | • | • |
hdfs.avg_queue_time | time | ms | • | • |
hdfs.avg_processing_time | time | ms | • | • |
hdfs.capacity | remaining, used | KiB | | • |
hdfs.used_capacity | dfs, non_dfs | KiB | | • |
hdfs.load | load | load | | • |
hdfs.volume_failures_total | failures | events/s | | • |
hdfs.files_total | files | num | | • |
hdfs.blocks_total | blocks | num | | • |
hdfs.blocks | corrupt, missing, under_replicated | num | | • |
hdfs.data_nodes | live, dead, stale | num | | • |
hdfs.datanode_capacity | remaining, used | KiB | • | |
hdfs.datanode_used_capacity | dfs, non_dfs | KiB | • | |
hdfs.datanode_failed_volumes | failed volumes | num | • | |
hdfs.datanode_bandwidth | reads, writes | KiB/s | • | |
The following alerts are available:
Alert name | On metric | Description |
---|---|---|
hdfs_capacity_usage | hdfs.capacity | summary datanodes space capacity utilization |
hdfs_missing_blocks | hdfs.blocks | number of missing blocks |
hdfs_stale_nodes | hdfs.data_nodes | number of datanodes marked stale due to delayed heartbeat |
hdfs_dead_nodes | hdfs.data_nodes | number of datanodes which are currently dead |
hdfs_num_failed_volumes | hdfs.num_failed_volumes | number of failed volumes |
Important: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.
To troubleshoot issues with the hdfs collector, run the go.d.plugin with the debug option enabled. The output should give you clues as to why the collector isn’t working.
Navigate to the plugins.d directory, usually at /usr/libexec/netdata/plugins.d/. If that’s not the case on your system, open netdata.conf and look for the plugins setting under [directories].
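For example, that section of netdata.conf typically looks like this (the path shown is only the usual default):
[directories]
    plugins = /usr/libexec/netdata/plugins.d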
cd /usr/libexec/netdata/plugins.d/
Switch to the netdata user.
sudo -u netdata -s
Run the go.d.plugin to debug the collector:
./go.d.plugin -d -m hdfs
If you’re encountering problems with the hdfs collector, follow these steps to retrieve logs and identify potential issues:
Use the following command to view logs generated since the last Netdata service restart:
journalctl _SYSTEMD_INVOCATION_ID="$(systemctl show --value --property=InvocationID netdata)" --namespace=netdata --grep hdfs
Locate the collector log file, typically at /var/log/netdata/collector.log, and use grep to filter for the collector’s name:
grep hdfs /var/log/netdata/collector.log
Note: This method shows logs from all restarts. Focus on the latest entries for troubleshooting current issues.
If your Netdata runs in a Docker container named “netdata” (replace if different), use this command:
docker logs netdata 2>&1 | grep hdfs