25 NGINX Directives to Audit Before Opening a Support Ticket

A practical checklist to help you find and fix common NGINX issues yourself

You’ve built a robust application, but it’s sluggish or throwing intermittent errors. Your first instinct might be to blame the code, but often the culprit lies in the intricate web of your NGINX configuration. Before you spend hours debugging your application or drafting a support ticket, a thorough NGINX config check can save you time and reveal simple yet impactful misconfigurations.

An NGINX configuration audit isn’t just for troubleshooting; it’s a proactive step toward better performance, tighter security, and greater stability. Many common issues related to performance bottlenecks, security vulnerabilities, or proxy errors stem from overlooked or poorly optimized directives. This checklist covers 25 essential directives you should review. Working through this list can help you pinpoint problems, implement best practices, and gain a deeper understanding of how NGINX operates.

Core and Events Directives

These foundational settings control how NGINX’s processes run and handle connections. Misconfigurations here can impact overall server capacity and stability.

  1. worker_processes: Defines the number of worker processes. A common starting point is to set this to auto, which will attempt to detect the number of available CPU cores. Setting this too low can underutilize your hardware, while setting it too high can lead to resource contention.

  2. worker_connections: Sets the maximum number of simultaneous connections a worker process can open. The default (512) is often too low for production. A typical value is 1024 or higher, but remember this includes all connections (e.g., to upstream servers), not just client connections.

  3. worker_rlimit_nofile: Sets the limit on the number of open files (RLIMIT_NOFILE) for worker processes. This should be at least double your worker_connections value to account for connections to clients, upstreams, and open log files.

  4. error_log: Specifies the file and logging level for errors. A common mistake is setting this to off, which doesn’t disable logging but creates a file named “off”. Ensure it’s pointing to a valid file and set to a useful level like warn or error for production. For debugging, use debug.

  5. user: Defines the user and group for worker processes. Running workers as root is a major security risk. Always specify a non-privileged user, like nginx or www-data.
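Taken together, the core and events directives above might look like this at the top of nginx.conf. The values shown are illustrative starting points, and the log path and user name are assumptions, not universal recommendations:

```nginx
# Run workers as an unprivileged user; one worker per CPU core.
user             nginx;
worker_processes auto;

# Let each worker open enough file descriptors for its connections
# (clients, upstreams) plus log files -- at least 2x worker_connections.
worker_rlimit_nofile 4096;

# Point error_log at a real file; "off" would create a file named "off".
error_log /var/log/nginx/error.log warn;

events {
    # Maximum simultaneous connections per worker, including upstreams.
    worker_connections 2048;
}
```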

Performance and Caching Directives

These settings directly influence how quickly and efficiently NGINX serves content to your users.

  1. sendfile: Enables or disables the use of the sendfile() system call, which allows for zero-copy transfer of file data from disk to the network socket. Set this to on for serving static files to improve throughput and reduce CPU load.

  2. tcp_nopush: When sendfile is on, this directive allows sending the response header and the beginning of a file in one packet. It helps optimize packet sending. Set to on.

  3. keepalive_timeout: Sets the timeout during which a keep-alive client connection will stay open. A value of around 65s is a good balance, allowing browsers to reuse connections without holding them open for too long.

  4. keepalive_requests: Defines the maximum number of requests that can be served through one keepalive connection. The default was raised from 100 to 1000 in NGINX 1.19.10. This helps manage memory by periodically closing long-lived connections.

  5. gzip: Enables or disables gzip compression. Set to on to reduce the size of transmitted data for text-based assets like HTML, CSS, and JavaScript.

  6. gzip_types: Specifies the MIME types to compress in addition to text/html. Ensure you include types like application/json, application/xml, and image/svg+xml. Do not gzip already compressed formats like JPEG or PNG.

  7. open_file_cache: Configures a cache for open file descriptors, their sizes, and modification times. This significantly speeds up access to frequently requested static files. A good starting point is open_file_cache max=1000 inactive=20s;.

  8. proxy_cache_path & proxy_cache: These directives set up and enable a shared memory zone for caching responses from upstream servers. Proper proxy caching is one of the most effective ways to reduce latency and load on your backend services.
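As a rough sketch, the performance settings above could sit together in the http context like this. The cache path, the zone name app_cache, and the sizes are illustrative assumptions to be tuned for your workload:

```nginx
http {
    # Zero-copy static file serving, with headers packed into the first packet.
    sendfile   on;
    tcp_nopush on;

    # Let browsers reuse connections without holding them open indefinitely.
    keepalive_timeout  65;
    keepalive_requests 1000;

    # Compress text-based assets; skip already-compressed formats.
    gzip       on;
    gzip_types application/json application/xml image/svg+xml
               text/css application/javascript;

    # Cache descriptors and metadata for frequently requested files.
    open_file_cache          max=1000 inactive=20s;
    open_file_cache_valid    30s;
    open_file_cache_min_uses 2;

    # Shared zone for caching upstream responses (enabled per location
    # with "proxy_cache app_cache;").
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                     max_size=1g inactive=60m;
}
```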

Security Directives

A misconfigured NGINX can expose your application to various attacks. Auditing these directives is critical.

  1. server_tokens: Controls whether the NGINX version appears on error pages and in the “Server” response header. Set it to off to avoid disclosing version information that could aid attackers.

  2. ssl_protocols: Defines the enabled SSL/TLS protocols. Disable outdated and vulnerable protocols like SSLv3, TLSv1.0, and TLSv1.1. A modern setting is TLSv1.2 TLSv1.3;.

  3. ssl_ciphers: Specifies the enabled cipher suites. Use a modern, strong cipher suite to protect against attacks. Tools such as the Mozilla SSL Configuration Generator can help you build an appropriate cipher string for your compatibility requirements.

  4. ssl_prefer_server_ciphers: When set to on, this ensures that the server’s preferred cipher suite order is used, rather than the client’s. This gives you control over the negotiated cipher strength.

  5. add_header Strict-Transport-Security: This HTTP header (HSTS) tells browsers to only connect to your site over HTTPS, preventing protocol downgrade attacks. A common value is add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;.

  6. limit_req_zone & limit_req: These directives control request rate limiting. They are essential for protecting login forms, API endpoints, and other resources from brute-force attacks.

  7. limit_conn_zone & limit_conn: Use these to limit the number of concurrent connections from a single IP address, helping to mitigate certain types of DoS attacks.
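A minimal sketch of these security settings follows. The zone names (login, perip), the rate and connection limits, and the /login location are hypothetical values chosen for illustration:

```nginx
# In the http context:
server_tokens             off;
ssl_protocols             TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

# Shared zones for rate and connection limiting, keyed by client IP.
limit_req_zone  $binary_remote_addr zone=login:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=perip:10m;

# Inside the relevant server block: protect a sensitive endpoint.
location /login {
    limit_req  zone=login burst=20 nodelay;
    limit_conn perip 10;
}
```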

Proxy and Upstream Directives

When using NGINX as a reverse proxy, these settings are vital for ensuring reliable and performant communication with your backend services.

  1. proxy_pass: Sets the address of the upstream server or server group. A common mistake is a trailing slash mismatch: with a URI part, as in proxy_pass http://upstream/;, NGINX replaces the matched location prefix with that URI before forwarding; without one, as in proxy_pass http://upstream;, the original request URI is passed to the upstream unchanged.

  2. proxy_set_header: Allows you to redefine or append fields to the request header passed to the upstream server. It’s crucial to pass the Host and client IP address correctly:

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    
  3. proxy_buffering: Toggling this off can seem like a good way to reduce latency for long-running requests, but with buffering disabled a slow client ties up its upstream connection for the entire response. For most cases, leaving it on is best practice.

  4. proxy_connect_timeout: Defines the timeout for establishing a connection with an upstream server. If your backend is occasionally slow to respond to new connections, the default of 60s might be too long, causing requests to hang.

  5. upstream keepalive: This directive, used within an upstream block, enables a cache of idle keepalive connections to your upstream servers, significantly reducing the overhead of opening a new TCP connection for every request. For it to take effect, you must also set proxy_http_version 1.1; and clear the Connection header with proxy_set_header Connection "";.
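Putting the proxy directives together, a minimal reverse-proxy sketch might look like the following. The upstream name backend, the backend address, and the timeout value are assumptions for illustration:

```nginx
upstream backend {
    server 127.0.0.1:8080;
    # Keep up to 32 idle connections open to the backend.
    keepalive 32;
}

server {
    listen 80;

    location / {
        # No URI part: the request URI is passed to the upstream unchanged.
        proxy_pass http://backend;

        # Forward the original host and client address.
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Required for the upstream keepalive cache to take effect.
        proxy_http_version 1.1;
        proxy_set_header   Connection "";

        # Fail fast if the backend is slow to accept connections.
        proxy_connect_timeout 5s;
        proxy_buffering       on;
    }
}
```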

Reviewing these 25 directives provides a solid foundation for a healthy and efficient NGINX setup. By moving from default values to a tuned configuration, you can solve existing problems, prevent future ones, and ensure your server is running at its full potential.

For a deeper, real-time view of how these directives impact your NGINX performance, monitoring is key. Netdata can automatically collect and visualize hundreds of NGINX metrics, allowing you to see the effect of your configuration changes instantly. Start your NGINX performance audit with Netdata today.