Docker DNS not working inside containers

Your application logs show connection timeouts. Health checks against dependency names are failing. curl from inside a container returns “Could not resolve host” while the same name resolves fine on the host. DNS inside Docker is not a simple passthrough to the host resolver. It is a stack of namespace-specific forwarders, embedded resolvers, and inherited configuration that breaks in specific, repeatable ways.

This guide will help you determine whether the failure is a missing embedded DNS, a poisoned resolv.conf, an upstream forwarding issue, or a version regression. You will be able to distinguish between inter-container name resolution failures and external lookup failures, identify the root cause with safe read-only checks, and apply the correct fix without guessing.

What this means

On user-defined bridge networks, including the networks Docker Compose creates automatically, Docker injects an embedded DNS forwarder at 127.0.0.11:53 into each container’s /etc/resolv.conf. This resolver handles container-to-container name lookups and forwards external queries to upstream servers that Docker cached from the host at container creation time.
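To see the embedded resolver in action, create a user-defined network and inspect resolv.conf from a container on it. A minimal sketch; the network and container names (demo-net, web) are placeholders:

# On a user-defined network, resolv.conf points at the embedded DNS
docker network create demo-net
docker run -d --name web --network demo-net nginx
docker run --rm --network demo-net busybox cat /etc/resolv.conf
# -> nameserver 127.0.0.11
docker run --rm --network demo-net busybox nslookup web
# -> resolves to web's container IP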

On the default bridge network, there is no embedded DNS. Containers receive a snapshot of the host’s /etc/resolv.conf at startup. They cannot resolve each other by container name unless you use the deprecated --link flag.
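The contrast is easy to demonstrate: the same lookups on the default bridge get a host snapshot and no name resolution. A sketch, reusing the hypothetical web container from above:

# On the default bridge, resolv.conf is a copy of the host's file
docker run --rm busybox cat /etc/resolv.conf
# -> host nameservers, no 127.0.0.11
docker run --rm busybox nslookup web
# -> fails: there is no embedded DNS on this network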

Because the DNS configuration is snapshotted when the container starts, dynamic changes on the host, such as a VPN connecting or systemd-resolved switching upstream servers, are invisible to running containers. The embedded DNS is also a forwarder, not a recursive resolver. If the upstream it cached becomes unreachable, all external resolution fails, even though the container’s network stack is otherwise healthy.
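A read-only way to spot a stale snapshot is to compare the container's creation time against the last change to the host's resolver file. A sketch, assuming a container named app on the default bridge:

# If the host file is newer than the container, the container may be
# holding an outdated copy of it
docker inspect --format '{{.Created}}' app
stat -c '%y' "$(readlink -f /etc/resolv.conf)"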

Common causes

Cause | What it looks like | First thing to check
systemd-resolved stub copied into container | resolv.conf shows nameserver 127.0.0.53; DNS returns SERVFAIL or times out | Host’s /etc/resolv.conf symlink target and daemon-level DNS config
Default bridge network limitations | Containers cannot resolve each other by name, only by IP | Container network mode via docker inspect
Container attached to wrong network | One service cannot reach another by name while other pairs work | docker network inspect membership for both containers
ndots and search domain storms | Slow startup, high query volume, apparent hangs on unqualified names | options line inside container /etc/resolv.conf
Upstream DNS unreachable | External names fail; container-to-container names may still work | Reachability of upstream IPs from the container namespace
Docker 29.1.0 regression | Pre-existing containers lost DNS after upgrading from 29.0.x; new containers unaffected | docker version output; recreation fixes it
Firewall or VPN intercepting port 53 | IP connectivity works but all name resolution fails | ping -c 3 1.1.1.1 works from inside the container
IPv6 AAAA query against embedded DNS | SERVFAIL on HTTPS/TYPE65 lookups for internal hostnames | Query type and whether the app expects AAAA synthesis

Quick checks

Run these read-only checks before making changes.

# Check the container's current DNS configuration
docker exec <container> cat /etc/resolv.conf

# Check which network mode the container uses
docker inspect --format '{{.HostConfig.NetworkMode}}' <container>

# Test name resolution from inside the container
docker exec <container> nslookup <hostname>

# Check if the host uses systemd-resolved
systemctl is-active systemd-resolved

# Check the actual file Docker copied resolv.conf from
readlink -f /etc/resolv.conf

# Check Docker version for known regressions
docker version --format '{{.Server.Version}}'

# Test embedded DNS directly in the container's network namespace
CONTAINER_PID=$(docker inspect --format '{{.State.Pid}}' <container>)
nsenter -t ${CONTAINER_PID} -n dig @127.0.0.11 <hostname>

# Check container logs for DNS-related errors
docker logs <container> 2>&1 | grep -iE "dns|resolve|lookup|nxdomain"

# Verify layer-3 connectivity independent of DNS
docker exec <container> ping -c 3 1.1.1.1

How to diagnose it

Follow this flow from symptom to root cause.

  1. Determine the container’s network mode. Run docker inspect --format '{{.HostConfig.NetworkMode}}' <container>. If the result is default, the container is on the default bridge, where containers cannot resolve each other by name. If you need service discovery, move the containers to a user-defined network or use Docker Compose, which creates one automatically.

  2. Inspect /etc/resolv.conf inside the container. If the nameserver is 127.0.0.11, the embedded DNS is in use. If it is 127.0.0.53, Docker copied the systemd-resolved stub address into the container namespace. Inside the container, 127.0.0.53 refers to the container’s own loopback, not the host’s resolver. This produces SERVFAIL or timeouts.

  3. Test internal versus external names. Run nslookup for another container name on the same network, then for an external domain like cloudflare.com. If internal names fail but external names work, the embedded DNS may not know about the target container; verify both containers are attached to the same network. If both fail, the embedded DNS cannot reach its upstream forwarders. A scripted version of this check follows this list.

  4. Verify IP connectivity. Run docker exec <container> ping -c 3 1.1.1.1. If this fails, the problem is not DNS. Check network interfaces, firewall rules, and bridge status. If IP works but DNS fails, the problem is strictly in name resolution.

  5. Check the host’s live resolver state. On hosts with systemd-resolved, /etc/resolv.conf often symlinks to /run/systemd/resolve/stub-resolv.conf. Docker copies this file at container creation. The real upstream servers live in /run/systemd/resolve/resolv.conf. If the host’s stub is in the container, configure explicit upstream IPs in daemon.json or expose the file with the real servers, as described under Fixes.

  6. Look for ndots misconfiguration. If options ndots:5 appears in the container’s resolv.conf, unqualified names like postgres trigger multiple suffix-appended lookups before falling back. This causes slow starts and CPU-bound blocking in some runtimes. Docker’s default on user-defined networks is saner, but host inheritance or Kubernetes overrides can push ndots:5 into containers.

  7. Check Docker version against known regressions. Docker 29.1.0 introduced a regression where pre-existing containers lost functional DNS after upgrade. Their resolv.conf still showed 127.0.0.11, but queries returned SERVFAIL. This was fixed in 29.1.1. If you are on 29.1.0, recreate the containers or upgrade the engine.

  8. Review daemon logs for embedded DNS or libnetwork errors. Run journalctl -u docker.service | grep -iE "dns|error|network". Errors here can reveal a hung embedded DNS, corrupted iptables rules, or plugin failures that do not show up inside the container.
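For step 3, the internal-versus-external split can be scripted as a single read-only check. A minimal sketch; db stands in for another container on the same network:

# Internal name: another container on the same user-defined network
docker exec <container> nslookup db && echo "internal OK" || echo "internal FAIL"
# External name: any public domain
docker exec <container> nslookup cloudflare.com && echo "external OK" || echo "external FAIL"
# internal FAIL, external OK -> check network membership of both containers
# both FAIL                  -> embedded DNS cannot reach its upstream forwarders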

Metrics and signals to monitor

DNS failures rarely happen in isolation. Correlate these signals to distinguish a DNS incident from a general network or daemon health issue.

Signal | Why it matters | Warning sign
Container network errors | veth pair issues or bridge drops manifest as packet loss that breaks DNS queries | rx_errors or tx_errors increasing
Docker DNS resolution latency | Embedded DNS at 127.0.0.11 is part of dockerd; slowness here degrades application startup | Resolution time >100 ms for external names
Container restart count | Applications that depend on DNS for service discovery may enter crash loops when resolution fails | Restart count increasing with exit code 1
Docker daemon response latency | A stressed or deadlocked daemon slows the embedded DNS forwarder | /_ping or docker ps latency >500 ms sustained
Docker bridge connection count | conntrack exhaustion silently drops packets, including UDP port 53 | nf_conntrack_count / nf_conntrack_max >70%
Container OOM killed status | DNS lookup storms from ndots misconfiguration can spike memory or CPU | OOMKilled: true in container state
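The conntrack ratio in the table can be read directly from /proc on the host. A sketch:

# Percentage of the connection-tracking table in use; sustained >70% is the warning sign
COUNT=$(cat /proc/sys/net/netfilter/nf_conntrack_count)
MAX=$(cat /proc/sys/net/netfilter/nf_conntrack_max)
echo "conntrack: ${COUNT}/${MAX} ($((100 * COUNT / MAX))%)"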

Fixes

If the cause is systemd-resolved

Configure explicit upstream DNS servers in /etc/docker/daemon.json:

{
  "dns": ["1.1.1.1", "8.8.8.8"],
  "dns-search": ["corp.example"]
}

Restart Docker with systemctl restart docker. Existing containers are not affected; recreate them to pick up the new resolvers. Alternatively, repoint the host’s /etc/resolv.conf symlink at /run/systemd/resolve/resolv.conf, which contains the real upstream addresses instead of the stub; Docker snapshots whatever that path resolves to at container creation.
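The symlink approach looks like this. A sketch; confirm the target file exists on your distribution before switching, and note that some network managers rewrite this symlink:

# Point the host's resolv.conf at the file with the real upstream servers,
# not the 127.0.0.53 stub, then restart Docker and recreate containers
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
sudo systemctl restart docker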

If the cause is default bridge network limitations

Move containers to a user-defined bridge network. Docker Compose does this automatically for every project. Do not use --link; it is deprecated and absent from modern Compose.
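Existing containers can be attached to a shared user-defined network without recreating them. A sketch with placeholder names app and db:

docker network create app-net
docker network connect app-net app
docker network connect app-net db
# Both containers now resolve each other by name via the embedded DNS
docker exec app nslookup db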

If the cause is ndots or search domains

Add "dns-opts": ["ndots:0"] or ["ndots:1"] to /etc/docker/daemon.json. This prevents suffix expansion on unqualified names. Recreate containers after changing daemon-level DNS options.

If the cause is upstream unreachability

Verify the upstream servers configured in daemon.json are reachable from the host. If the host relies on a VPN that dynamically rewrites resolvers, be aware that running containers continue using the old set. You must recreate containers after the VPN changes host DNS, or use fixed daemon-level DNS IPs that are valid in all network states.
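To test each configured upstream from the host with fast failure, loop over the IPs with short timeouts. A sketch; the addresses are examples:

# One try, two-second timeout per server
for ns in 1.1.1.1 8.8.8.8; do
  dig @${ns} example.com +time=2 +tries=1 +short >/dev/null \
    && echo "${ns} OK" || echo "${ns} FAIL"
done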

If the cause is the Docker 29.1.0 regression

Upgrade to Docker 29.1.1 or later. As a workaround without upgrading, recreate all affected containers. For Compose workloads, run docker compose down && docker compose up -d.
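A small guard that recreates a Compose project only on the affected release. A sketch, assuming the compose file is in the current directory:

# Recreate containers only if the engine is the affected version
if [ "$(docker version --format '{{.Server.Version}}')" = "29.1.0" ]; then
  docker compose down && docker compose up -d
fi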

If the cause is firewall or VPN interception

Third-party firewalls that intercept outbound DNS on port 53 can block Docker’s embedded DNS from reaching upstream resolvers. If ping -c 3 1.1.1.1 works from inside the container but name resolution fails, whitelist the container network’s path to port 53 or switch to an upstream that is not intercepted.
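To separate interception from a dead upstream, query a public resolver directly from inside the container over both UDP and TCP. A sketch; it assumes dig is available in the image, otherwise use the nsenter pattern from Quick checks:

# If UDP fails but TCP succeeds, something on the path is filtering UDP port 53
docker exec <container> dig @1.1.1.1 example.com +time=2 +tries=1
docker exec <container> dig @1.1.1.1 example.com +tcp +time=2 +tries=1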

If the cause is IPv6 AAAA queries

Docker’s embedded DNS handles A and PTR records for container names. It does not synthesize AAAA records. Applications that issue HTTPS or TYPE65 queries expecting IPv6 answers for internal hostnames will receive SERVFAIL. Use IPv4-only lookups for internal container names, or handle the absence of AAAA at the application level.
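You can confirm this behavior by querying the embedded DNS for each record type from the container's network namespace, reusing the nsenter pattern from Quick checks. web is a placeholder for a container name on the same network:

CONTAINER_PID=$(docker inspect --format '{{.State.Pid}}' <container>)
nsenter -t ${CONTAINER_PID} -n dig @127.0.0.11 web A        # answers with the container IP
nsenter -t ${CONTAINER_PID} -n dig @127.0.0.11 web AAAA     # no IPv6 answer for internal names
nsenter -t ${CONTAINER_PID} -n dig @127.0.0.11 web TYPE65   # SERVFAIL, per the behavior above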

Prevention

  • Use user-defined networks for all multi-container workloads. This enables the embedded DNS and service discovery by default.
  • Set explicit DNS in daemon.json. Never rely on the host’s dynamic resolv.conf if the host runs systemd-resolved. Point Docker to real, stable upstream IPs; a combined daemon.json sketch follows this list.
  • Keep ndots low. Set "dns-opts": ["ndots:0"] in daemon.json to avoid lookup storms in containerized applications.
  • Recreate containers after host network changes. Docker’s DNS snapshot does not update dynamically. Treat VPN connections and resolver changes as events that require container recreation.
  • Test Docker engine upgrades in staging. Version regressions like 29.1.0 can break DNS for existing containers. Validate upgrades with live workloads before production rollout.
  • Monitor the signals. Alert on container restart counts, daemon latency, and DNS resolution latency from within representative containers.
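The daemon.json recommendations above combine into one file. A sketch; the upstream IPs and search domain are examples to replace with your own:

{
  "dns": ["1.1.1.1", "8.8.8.8"],
  "dns-search": ["corp.example"],
  "dns-opts": ["ndots:0"]
}

Restart Docker and recreate containers after changing it, as with any daemon-level DNS option.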

How Netdata helps

Netdata collects the signals that surround a DNS incident, letting you correlate rather than guess.

  • Container network error charts show rx_errors and tx_dropped per container. A jump here alongside DNS timeouts points to a veth or bridge issue, not a resolver bug.
  • Docker daemon latency monitoring tracks how fast the API responds. Since the embedded DNS lives inside dockerd, rising /_ping latency is an early warning that DNS forwarding will degrade.
  • Container restart count and exit code tracking reveal when an application is crash-looping because it cannot resolve a dependency name.
  • conntrack utilization on the host shows whether the connection tracker is approaching exhaustion, which silently drops UDP DNS packets before they ever reach a resolver.
  • CPU throttling metrics catch the secondary effect of ndots-induced lookup storms, where a container spends excessive time in resolver syscalls and hits its CFS quota.