Nvidia GPU

Plugin: go.d.plugin Module: nvidia_smi

Overview

This collector monitors GPU performance metrics using the nvidia-smi CLI tool.

This collector is supported on all platforms.

This collector supports collecting metrics from multiple instances of this integration, including remote instances.

Default Behavior

Auto-Detection

This integration doesn’t support auto-detection.

Limits

The default configuration for this integration does not impose any limits on data collection.

Performance Impact

The default configuration for this integration is not expected to impose a significant performance impact on the system.

Setup

Prerequisites

No action required.

Configuration

File

The configuration file name for this integration is go.d/nvidia_smi.conf.

You can edit the configuration file using the edit-config script from the Netdata config directory.

cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config go.d/nvidia_smi.conf

Options

The following options can be defined globally: update_every, autodetection_retry.

update_every: Data collection frequency. Default: 10. Required: no.

autodetection_retry: Recheck interval in seconds. Zero means no recheck will be scheduled. Default: 0. Required: no.

binary_path: Path to the nvidia_smi binary. With the default value, the executable is looked up in the directories listed in the PATH environment variable. Default: nvidia_smi. Required: no.

timeout: The maximum duration, in seconds, to wait for an nvidia-smi command to complete. How it applies depends on the collector's mode. In loop mode, the timeout primarily determines how long to wait for the initial nvidia-smi execution; if that query takes longer than the timeout, the collector may report an error. On systems with multiple GPUs, the initial load can be significant (e.g. 5-10 seconds). In regular mode, the timeout applies to each individual nvidia-smi execution. Default: 10. Required: no.

loop_mode: When enabled, nvidia-smi is executed continuously in a separate thread using the -l option. Default: yes. Required: no.
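
To illustrate how these options combine, here is a minimal, hypothetical sketch of a go.d/nvidia_smi.conf: update_every and autodetection_retry are set once at the top of the file, and update_every is then overridden for a single job (the values are examples, not recommendations).

# Hypothetical sketch, not a recommended configuration.
update_every: 5            # file-wide default for all jobs below
autodetection_retry: 60    # recheck a failed job every 60 seconds

jobs:
  - name: nvidia_smi
    update_every: 10       # per-job override of the file-wide value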

Examples

Custom binary path

Use this configuration when the executable is not in a directory listed in the PATH environment variable.

jobs:
  - name: nvidia_smi
    binary_path: /usr/local/sbin/nvidia_smi
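
Increased timeout (multi-GPU host)

A hedged sketch, not one of the upstream examples: on a host with several GPUs the initial nvidia-smi query can take noticeably longer (see the timeout option above), so this raises the timeout and, purely for illustration, disables loop mode so the timeout applies to every execution.

jobs:
  - name: nvidia_smi
    timeout: 20      # seconds; the default is 10
    loop_mode: no    # run nvidia-smi per collection instead of continuously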

Metrics

Metrics grouped by scope.

The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.

Per gpu

These metrics refer to the GPU.

Labels:

Label | Description
uuid | GPU uuid (e.g. GPU-27b94a00-ed54-5c24-b1fd-1054085de32a)
index | GPU index (nvidia_smi typically orders GPUs by PCI bus ID)
product_name | GPU product name (e.g. NVIDIA A100-SXM4-40GB)

Metrics:

Metric | Dimensions | Unit
nvidia_smi.gpu_pcie_bandwidth_usage | rx, tx | B/s
nvidia_smi.gpu_pcie_bandwidth_utilization | rx, tx | %
nvidia_smi.gpu_fan_speed_perc | fan_speed | %
nvidia_smi.gpu_utilization | gpu | %
nvidia_smi.gpu_memory_utilization | memory | %
nvidia_smi.gpu_decoder_utilization | decoder | %
nvidia_smi.gpu_encoder_utilization | encoder | %
nvidia_smi.gpu_frame_buffer_memory_usage | free, used, reserved | B
nvidia_smi.gpu_bar1_memory_usage | free, used | B
nvidia_smi.gpu_temperature | temperature | Celsius
nvidia_smi.gpu_voltage | voltage | V
nvidia_smi.gpu_clock_freq | graphics, video, sm, mem | MHz
nvidia_smi.gpu_power_draw | power_draw | Watts
nvidia_smi.gpu_performance_state | P0-P15 | state
nvidia_smi.gpu_mig_mode_current_status | enabled, disabled | status
nvidia_smi.gpu_mig_devices_count | mig | devices

Per mig

These metrics refer to the Multi-Instance GPU (MIG).

Labels:

Label | Description
uuid | GPU uuid (e.g. GPU-27b94a00-ed54-5c24-b1fd-1054085de32a)
product_name | GPU product name (e.g. NVIDIA A100-SXM4-40GB)
gpu_instance_id | GPU instance id (e.g. 1)

Metrics:

Metric | Dimensions | Unit
nvidia_smi.gpu_mig_frame_buffer_memory_usage | free, used, reserved | B
nvidia_smi.gpu_mig_bar1_memory_usage | free, used | B

Alerts

There are no alerts configured by default for this integration.

Troubleshooting

Debug Mode

Important: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.

To troubleshoot issues with the nvidia_smi collector, run the go.d.plugin with the debug option enabled. The output should give you clues as to why the collector isn’t working.

  • Navigate to the plugins.d directory, usually at /usr/libexec/netdata/plugins.d/. If that’s not the case on your system, open netdata.conf and look for the plugins setting under [directories].

    cd /usr/libexec/netdata/plugins.d/
    
  • Switch to the netdata user.

    sudo -u netdata -s
    
  • Run the go.d.plugin to debug the collector:

    ./go.d.plugin -d -m nvidia_smi
    

Getting Logs

If you’re encountering problems with the nvidia_smi collector, follow these steps to retrieve logs and identify potential issues:

  • Run the command specific to your system (systemd, non-systemd, or Docker container).
  • Examine the output for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.

System with systemd

Use the following command to view logs generated since the last Netdata service restart:

journalctl _SYSTEMD_INVOCATION_ID="$(systemctl show --value --property=InvocationID netdata)" --namespace=netdata --grep nvidia_smi

System without systemd

Locate the collector log file, typically at /var/log/netdata/collector.log, and use grep to filter for the collector's name:

grep nvidia_smi /var/log/netdata/collector.log

Note: This method shows logs from all restarts. Focus on the latest entries for troubleshooting current issues.

Docker Container

If your Netdata runs in a Docker container named “netdata” (replace if different), use this command:

docker logs netdata 2>&1 | grep nvidia_smi
