Docker container cannot connect to the internet

The host may be online while Docker’s bridge, iptables rules, DNS proxy, or connection tracking table is in a broken state. Applications fail with connection timeouts, package managers stall, and external health checks return unhealthy. Isolate whether the failure is DNS, routing, packet filtering, or daemon state before fixing it.

What this means

Outbound connectivity from a container traverses the container’s network namespace, a veth pair attached to a bridge (docker0 or user-defined), iptables NAT and filter rules managed by Docker’s libnetwork, the host’s routing table, and the upstream physical interface. On user-defined networks, Docker’s embedded DNS resolver at 127.0.0.11 proxies queries to the host’s configured resolvers. On the default bridge network, there is no embedded DNS; containers inherit the host’s /etc/resolv.conf directly. A failure at any layer produces the same symptom: requests time out.
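
As a concrete illustration of that path, the commands below trace one container’s interface to its host-side veth peer and confirm the peer is attached to the expected bridge. This is a sketch under two assumptions: the <container_id> placeholder is yours to fill in, and the container’s interface is named eth0.

# Resolve the container's init PID, then read the peer index of its eth0
CONTAINER_PID=$(docker inspect --format '{{.State.Pid}}' <container_id>)
PEER_INDEX=$(nsenter -t ${CONTAINER_PID} -n cat /sys/class/net/eth0/iflink)

# Find the host-side veth with that index, then list interfaces attached to docker0
ip -o link | awk -v idx="${PEER_INDEX}" -F': ' '$1 == idx {print $2}'
ip link show master docker0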

Common causes

| Cause | What it looks like | First thing to check |
| --- | --- | --- |
| DNS misconfiguration | nslookup fails but ping to an IP works | /etc/resolv.conf inside the container |
| iptables rule corruption or firewall conflict | Silent packet drops; no error in container logs | iptables -L DOCKER -n -v and iptables -t nat -L DOCKER -n -v |
| Docker bridge down | All containers on the same bridge lose connectivity | ip link show docker0 and cat /sys/class/net/docker0/operstate |
| conntrack table exhaustion | New connections hang; existing TCP streams continue | cat /proc/sys/net/netfilter/nf_conntrack_count vs nf_conntrack_max |
| Embedded DNS stall | Intermittent name resolution on user-defined networks | Docker daemon responsiveness via /_ping |
| Orphaned veth or network namespace leak | Some containers work, others do not after churn | Network interface count vs running container count |

Quick checks

Run these from the host to isolate the failure layer. Most require root.

# 1. Test IP reachability from inside the container
docker exec <container_id> ping -c 3 <external_ip>

# 2. Test DNS resolution explicitly
docker exec <container_id> nslookup <external_host>

# 3. Inspect DNS configuration inside the container
docker exec <container_id> cat /etc/resolv.conf

# 4. Check Docker bridge interface state
ip link show docker0
cat /sys/class/net/docker0/operstate

# 5. Verify Docker iptables rules are present
iptables -L DOCKER -n -v
iptables -t nat -L DOCKER -n -v

# 6. Check conntrack table utilization
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max

# 7. Look for interface errors in the container namespace
CONTAINER_PID=$(docker inspect --format '{{.State.Pid}}' <container_id>)
nsenter -t ${CONTAINER_PID} -n ip -s link show

# 8. Confirm the daemon is responsive
curl -s --max-time 5 --unix-socket /var/run/docker.sock http://localhost/_ping

How to diagnose it

  1. Determine whether the failure is DNS or IP. If ping to an external IP works but nslookup does not, the problem is DNS. Skip to the DNS steps. If both fail, the problem is lower in the stack.

  2. Check which network the container uses. Run docker inspect <container_id> and check NetworkSettings.Networks. If the container is on a user-defined network, DNS is handled by Docker’s embedded resolver at 127.0.0.11. If it is on the default bridge, Docker copies the host’s /etc/resolv.conf into the container.

  3. Inspect DNS configuration. Run docker exec <container_id> cat /etc/resolv.conf. On user-defined networks, the nameserver should be 127.0.0.11. If the host’s /etc/resolv.conf points to a local loopback address such as 127.0.0.53 (systemd-resolved) or 127.0.0.1, containers on the default bridge may not be able to reach it because loopback addresses are not routable from inside the container namespace. The first sketch after this list shows how to check the container’s network and the host’s real upstream resolvers.

  4. Check bridge interface health. Run ip link show docker0 and cat /sys/class/net/docker0/operstate. The bridge must be up. If the bridge is down, all containers attached to it lose connectivity. Also verify that the host itself has working upstream connectivity; Docker depends on the host’s default route and physical interface.

  5. Inspect iptables rules. Run iptables -L DOCKER -n -v and iptables -t nat -L DOCKER -n -v. Docker dynamically inserts these rules. If another tool has flushed or reordered iptables, traffic from the bridge may be dropped before it reaches the host’s outbound interface. If the DOCKER chains are missing or empty, Docker has lost control of the firewall.

  6. Check for conntrack exhaustion. Compare /proc/sys/net/netfilter/nf_conntrack_count to /proc/sys/net/netfilter/nf_conntrack_max. When the table fills, new outbound connections are silently dropped. There is no RST or ICMP error; the SYN simply disappears. This is a common failure mode on busy hosts with many short-lived connections.

  7. Check for network namespace or veth leaks. A container destroyed while the daemon is under stress may leave behind an orphaned veth pair or network namespace. Compare the number of running containers to the number of host-side veth interfaces (ip -o link show type veth | wc -l); with one bridge network per container, the two counts should be close. A large gap indicates leaked resources that can interfere with new containers. The second sketch after this list automates the comparison.

  8. Verify daemon responsiveness. Run curl -s --max-time 5 --unix-socket /var/run/docker.sock http://localhost/_ping. The embedded DNS server runs inside dockerd. If the daemon is under memory pressure or internal lock contention, DNS queries will stall before other symptoms appear. Slow pings are an early warning.

  9. Reinitialize networking if needed. If DNS configuration is stale because the container started before a network change, restarting the container re-creates its network namespace and /etc/resolv.conf. If iptables rules are missing, restarting the Docker daemon reinserts them and recreates bridge configuration if necessary. With live-restore enabled, running containers survive the restart, though there may be a brief network interruption.
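
A short sketch for steps 2 and 3, assuming a systemd-resolved host and using the document’s <container_id> placeholder: it lists the networks the container is attached to, shows the resolver the container sees, and reports the upstream resolvers the host actually uses.

# Which networks is the container attached to? (user-defined vs the default bridge)
docker inspect --format '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{end}}' <container_id>

# What resolver does the container see?
docker exec <container_id> cat /etc/resolv.conf

# On systemd-resolved hosts, the real upstream resolvers live here, not in /etc/resolv.conf
resolvectl status 2>/dev/null || systemd-resolve --status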

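A second sketch, for steps 6 and 7: compute conntrack utilization as a percentage and compare running containers against host-side veth interfaces. It assumes one veth per container, which holds for containers with a single bridge network attached.

# conntrack utilization as a percentage of the maximum
COUNT=$(cat /proc/sys/net/netfilter/nf_conntrack_count)
MAX=$(cat /proc/sys/net/netfilter/nf_conntrack_max)
echo "conntrack: ${COUNT}/${MAX} ($(( COUNT * 100 / MAX ))%)"

# Running containers vs host-side veth interfaces; a large gap suggests leaked veths
echo "containers: $(docker ps -q | wc -l)"
echo "veth interfaces: $(ip -o link show type veth | wc -l)"
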
Metrics and signals to monitor

| Signal | Why it matters | Warning sign |
| --- | --- | --- |
| Bridge interface state | A down bridge breaks all attached containers | docker0 operstate is not up |
| Container network errors | Indicates veth pair issues, bridge saturation, or rule drops | rx_errors or tx_dropped increasing |
| Docker DNS resolution latency | Embedded DNS at 127.0.0.11 is a common bottleneck | Resolution latency spikes or frequent timeouts |
| conntrack utilization | Table exhaustion causes silent connection drops | nf_conntrack_count / nf_conntrack_max above 70% |
| Docker daemon responsiveness | A stalled daemon stalls DNS with it | /_ping latency spikes or failures |
| Container network throughput | Sudden flatline indicates namespace failure | tx_bytes drops to zero while the process is active |
| Bridge connection count | High connection counts increase iptables and conntrack load | Connection count growing without container growth |

Fixes

If the cause is DNS misconfiguration

  • Containers on user-defined networks: ensure /etc/resolv.conf inside the container shows nameserver 127.0.0.11. If an application overwrites this file, fix the application or mount a read-only resolv.conf. Restarting the container regenerates the file from Docker’s configuration.
  • Containers on the default bridge: Docker copies the host’s resolv.conf. If the host uses a loopback resolver, the container may not reach it. Switch the host to use an upstream nameserver, or use user-defined networks where Docker can proxy DNS.
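
One way to give containers working resolvers, sketched below with example addresses (1.1.1.1 and 8.8.8.8 stand in for whatever resolvers you actually use): override DNS per container, or set it daemon-wide.

# Per-container override (the image must include nslookup for this test)
docker run --rm --dns 1.1.1.1 <image> nslookup <external_host>

# Daemon-wide setting: add a "dns" key to /etc/docker/daemon.json, then restart Docker
cat /etc/docker/daemon.json        # review existing settings before editing
# e.g. { "dns": ["1.1.1.1", "8.8.8.8"] }
systemctl restart docker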

If the cause is iptables or firewall conflict

  • Do not run iptables -F or equivalent while Docker is managing containers. If you use firewalld, ufw, or custom scripts that manipulate iptables, restart Docker after any firewall change so it can reinsert its rules into the filter and nat tables.
  • If the DOCKER chains are missing from iptables -L or iptables -t nat -L, restart the Docker daemon. With live-restore: true, running containers survive the restart.
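
A minimal recovery sketch, assuming systemd manages the Docker service: restart the daemon and confirm its chains are back.

# With "live-restore": true in /etc/docker/daemon.json, running containers keep running across daemon restarts
systemctl restart docker

# Confirm Docker reinserted its chains
iptables -L DOCKER -n -v
iptables -t nat -L DOCKER -n -v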

If the cause is conntrack exhaustion

  • Increase net.netfilter.nf_conntrack_max using sysctl:
    sysctl -w net.netfilter.nf_conntrack_max=<value>
    
    To persist, add the setting to /etc/sysctl.conf or a file under /etc/sysctl.d/ (see the sketch after this list).
  • Reduce unnecessary short-lived connections from inside containers. Each new outbound connection through NAT consumes a conntrack entry.
  • Review outbound NAT load. Heavy outbound traffic through the bridge MASQUERADE rule creates conntrack state even without published ports.
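
As an example of raising and persisting the limit (262144 is only an illustrative value; size it to your workload, since each conntrack entry consumes kernel memory):

# Apply now and persist across reboots (example value)
sysctl -w net.netfilter.nf_conntrack_max=262144
echo 'net.netfilter.nf_conntrack_max = 262144' > /etc/sysctl.d/99-conntrack.conf
sysctl --system        # reload settings from /etc/sysctl.d and /etc/sysctl.conf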

If the cause is bridge or veth failure

  • A daemon restart recreates docker0 and reattaches routing rules. If live-restore is enabled, this is safe for running containers, though expect a brief network interruption.
  • If veth pairs are orphaned from unclean container removals, manual interface cleanup with ip link del <veth_name> or a daemon restart may be required to remove stale links.
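
The sketch below lists host-side veth interfaces that no running container’s eth0 appears to peer with; it assumes each container has a single eth0 on a bridge network. Treat the output as candidates to review, not as a delete list.

# Collect the host-side veth name for each running container's eth0
IN_USE=""
for CID in $(docker ps -q); do
  PID=$(docker inspect --format '{{.State.Pid}}' ${CID})
  IDX=$(nsenter -t ${PID} -n cat /sys/class/net/eth0/iflink 2>/dev/null)
  NAME=$(ip -o link | awk -v i="${IDX}" -F': ' '$1 == i {print $2}' | cut -d@ -f1)
  IN_USE="${IN_USE} ${NAME}"
done

# Any remaining veth is a candidate orphan; inspect it before running ip link del <veth_name>
for VETH in $(ip -o link show type veth | awk -F': ' '{print $2}' | cut -d@ -f1); do
  case " ${IN_USE} " in *" ${VETH} "*) ;; *) echo "possibly orphaned: ${VETH}" ;; esac
done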

If the cause is host routing failure

  • Repair upstream connectivity on the host before debugging Docker. Containers rely on the host’s default gateway and physical interface.
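
A quick host-side sanity check, run on the host itself; 1.1.1.1 is only an example destination and <physical_interface> is a placeholder for your uplink.

ip route show default                  # confirm a default route exists
ping -c 3 -W 2 1.1.1.1                 # example external IP
ip -s link show <physical_interface>   # check the uplink for errors and drops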

Prevention

  • Monitor conntrack utilization on any host with heavy container traffic. Alert when usage crosses 70% of the maximum.
  • Treat iptables as a shared resource between Docker and host firewalls. After any firewall change, validate that Docker’s chains are still present.
  • Prefer user-defined bridge networks over the default bridge. User-defined networks enable Docker’s embedded DNS server, which is more reliable than inheriting the host’s resolv.conf. See the sketch after this list.
  • Monitor Docker daemon latency, not just process existence. DNS is served by dockerd; rising /_ping latency predicts DNS stalls before they become outages.
  • Avoid relying on loopback resolvers on the host for containers on the default bridge. Those addresses are not reachable from inside a container network namespace.
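
Moving an application to a user-defined network is a small change; a minimal sketch with placeholder names (<app_network>, <image>, <container_id>):

docker network create <app_network>
docker run -d --network <app_network> <image>

# Containers on the user-defined network resolve through Docker's embedded DNS
docker exec <container_id> cat /etc/resolv.conf    # should show nameserver 127.0.0.11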

How Netdata helps

  • Correlates per-container network I/O with host-level network errors and drops to distinguish container bugs from bridge or veth issues.
  • Tracks nf_conntrack_count against the host maximum and alerts before silent drops begin.
  • Monitors Docker daemon /_ping latency to detect DNS stalls caused by daemon stress.
  • Surfaces per-container network error counters, including rx_dropped and tx_dropped, to identify packet loss at the veth or bridge layer.
  • Charts container CPU throttling alongside network throughput to reveal if network processing is being starved by CFS bandwidth limits.