# Docker container cannot connect to the internet
The host may be online while Docker’s bridge, iptables rules, DNS proxy, or connection tracking table is in a broken state. Applications fail with connection timeouts, package managers stall, and external health checks return unhealthy. Isolate whether the failure is DNS, routing, packet filtering, or daemon state before fixing it.
## What this means
Outbound connectivity from a container traverses the container’s network namespace, a veth pair attached to a bridge (`docker0` or user-defined), iptables NAT and filter rules managed by Docker’s libnetwork, the host’s routing table, and the upstream physical interface. On user-defined networks, Docker’s embedded DNS resolver at `127.0.0.11` proxies queries to the host’s configured resolvers. On the default bridge network, there is no embedded DNS; containers inherit the host’s `/etc/resolv.conf` directly. A failure at any layer produces the same symptom: requests time out.
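That default-bridge behavior is the most common trap: if the host lists only loopback resolvers (for example systemd-resolved's `127.0.0.53`), containers inherit an address they cannot reach. A minimal sketch of a check for this condition — the helper name is illustrative, not part of Docker:

```sh
# Illustrative helper: report whether a resolv.conf lists ONLY loopback
# nameservers, which containers on the default bridge cannot reach.
only_loopback_resolvers() {
  # Reads resolv.conf text on stdin; prints "yes" or "no".
  awk '
    /^nameserver/ { total++; if ($2 ~ /^127\./) loopback++ }
    END { if (total > 0 && loopback == total) print "yes"; else print "no" }
  '
}

# Example usage on a real host:
#   only_loopback_resolvers < /etc/resolv.conf
```

A "yes" on the host predicts DNS failures for every container on the default bridge, before any container is even started.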
## Common causes
| Cause | What it looks like | First thing to check |
|---|---|---|
| DNS misconfiguration | `nslookup` fails but `ping` to an IP works | `/etc/resolv.conf` inside the container |
| iptables rule corruption or firewall conflict | Silent packet drops; no error in container logs | `iptables -L DOCKER -n -v` and `iptables -t nat -L DOCKER -n -v` |
| Docker bridge down | All containers on the same bridge lose connectivity | `ip link show docker0` and `cat /sys/class/net/docker0/operstate` |
| conntrack table exhaustion | New connections hang; existing TCP streams continue | `cat /proc/sys/net/netfilter/nf_conntrack_count` vs `nf_conntrack_max` |
| Embedded DNS stall | Intermittent name resolution on user-defined networks | Docker daemon responsiveness via `/_ping` |
| Orphaned veth or network namespace leak | Some containers work, others do not after churn | Network interface count vs running container count |
## Quick checks
Run these from the host to isolate the failure layer. Most require root.
```bash
# 1. Test IP reachability from inside the container
docker exec <container_id> ping -c 3 <external_ip>

# 2. Test DNS resolution explicitly
docker exec <container_id> nslookup <external_host>

# 3. Inspect DNS configuration inside the container
docker exec <container_id> cat /etc/resolv.conf

# 4. Check Docker bridge interface state
ip link show docker0
cat /sys/class/net/docker0/operstate

# 5. Verify Docker iptables rules are present
iptables -L DOCKER -n -v
iptables -t nat -L DOCKER -n -v

# 6. Check conntrack table utilization
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max

# 7. Look for interface errors in the container namespace
CONTAINER_PID=$(docker inspect --format '{{.State.Pid}}' <container_id>)
nsenter -t ${CONTAINER_PID} -n ip -s link show

# 8. Confirm the daemon is responsive
curl -s --max-time 5 --unix-socket /var/run/docker.sock http://localhost/_ping
```
## How to diagnose it
1. **Determine whether the failure is DNS or IP.** If `ping` to an external IP works but `nslookup` does not, the problem is DNS. Skip to the DNS steps. If both fail, the problem is lower in the stack.
2. **Check which network the container uses.** Run `docker inspect <container_id>` and check `NetworkSettings.Networks`. If the container is on a user-defined network, DNS is handled by Docker’s embedded resolver at `127.0.0.11`. If it is on the default bridge, Docker copies the host’s `/etc/resolv.conf` into the container.
3. **Inspect DNS configuration.** Run `docker exec <container_id> cat /etc/resolv.conf`. On user-defined networks, the nameserver should be `127.0.0.11`. If the host’s `/etc/resolv.conf` points to a local loopback address such as `127.0.0.53` (systemd-resolved) or `127.0.0.1`, containers on the default bridge may not be able to reach it because loopback addresses are not routable from inside the container namespace.
4. **Check bridge interface health.** Run `ip link show docker0` and `cat /sys/class/net/docker0/operstate`. The bridge must be `up`. If the bridge is down, all containers attached to it lose connectivity. Also verify that the host itself has working upstream connectivity; Docker depends on the host’s default route and physical interface.
5. **Inspect iptables rules.** Run `iptables -L DOCKER -n -v` and `iptables -t nat -L DOCKER -n -v`. Docker dynamically inserts these rules. If another tool has flushed or reordered iptables, traffic from the bridge may be dropped before it reaches the host’s outbound interface. If the DOCKER chains are missing or empty, Docker has lost control of the firewall.
6. **Check for conntrack exhaustion.** Compare `/proc/sys/net/netfilter/nf_conntrack_count` to `/proc/sys/net/netfilter/nf_conntrack_max`. When the table fills, new outbound connections are silently dropped. There is no RST or ICMP error; the SYN simply disappears. This is a common failure mode on busy hosts with many short-lived connections.
7. **Check for network namespace or veth leaks.** A container destroyed while the daemon is under stress may leave behind an orphaned veth pair or network namespace. Compare the number of running containers to the number of network interfaces (`ip link show | wc -l`). A large gap indicates leaked resources that can interfere with new containers.
8. **Verify daemon responsiveness.** Run `curl -s --max-time 5 --unix-socket /var/run/docker.sock http://localhost/_ping`. The embedded DNS server runs inside dockerd. If the daemon is under memory pressure or internal lock contention, DNS queries will stall before other symptoms appear. Slow pings are an early warning.
9. **Reinitialize networking if needed.** If DNS configuration is stale because the container started before a network change, restarting the container re-creates its network namespace and `/etc/resolv.conf`. If iptables rules are missing, restarting the Docker daemon reinserts them and recreates bridge configuration if necessary. With `live-restore` enabled, running containers survive the restart, though there may be a brief network interruption.
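The first branching decision above can be sketched as a tiny triage helper (the function name and labels are illustrative) that maps the results of the ping and nslookup probes to the layer worth investigating first:

```sh
# Illustrative triage helper: classify the failure layer from the two
# probe results (ping to an external IP, nslookup of an external name).
# Arguments are "ok" or "fail"; prints the layer to investigate first.
classify_failure() {
  ping_result=$1
  dns_result=$2
  if [ "$ping_result" = "ok" ] && [ "$dns_result" = "fail" ]; then
    echo "dns"                   # IP reachability works, names do not
  elif [ "$ping_result" = "fail" ]; then
    echo "routing-or-filtering"  # no IP reachability: bridge, iptables, conntrack
  else
    echo "none"                  # both probes succeeded
  fi
}

# On a real host the arguments would come from the docker exec probes, e.g.:
#   docker exec <container_id> ping -c 1 <external_ip> && ping_result=ok || ping_result=fail
```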
## Metrics and signals to monitor
| Signal | Why it matters | Warning sign |
|---|---|---|
| Bridge interface state | A down bridge breaks all attached containers | `docker0` operstate is not `up` |
| Container network errors | Indicates veth pair issues, bridge saturation, or rule drops | `rx_errors` or `tx_dropped` increasing |
| Docker DNS resolution latency | Embedded DNS at `127.0.0.11` is a common bottleneck | Resolution latency spikes or frequent timeouts |
| conntrack utilization | Table exhaustion causes silent connection drops | `nf_conntrack_count / nf_conntrack_max` above 70% |
| Docker daemon responsiveness | A stalled daemon stalls DNS with it | `/_ping` latency spikes or failures |
| Container network throughput | Sudden flatline indicates namespace failure | `tx_bytes` drops to zero while the process is active |
| Bridge connection count | High connection counts increase iptables and conntrack load | Connection count growing without container growth |
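The conntrack signal is simple enough to script directly. A minimal sketch, with a 70% threshold matching the warning sign above (function name and threshold are illustrative; the fallbacks keep it safe on hosts where the conntrack module is not loaded):

```sh
# Illustrative conntrack utilization check.
conntrack_pct() {
  # Integer percent utilization: count * 100 / max
  echo $(( $1 * 100 / $2 ))
}

# Fall back to harmless values if the conntrack module is not loaded.
count=$(cat /proc/sys/net/netfilter/nf_conntrack_count 2>/dev/null || echo 0)
max=$(cat /proc/sys/net/netfilter/nf_conntrack_max 2>/dev/null || echo 1)

if [ "$(conntrack_pct "$count" "$max")" -ge 70 ]; then
  echo "warning: conntrack table at $(conntrack_pct "$count" "$max")% of $max"
fi
```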
## Fixes

### If the cause is DNS misconfiguration
- Containers on user-defined networks: ensure `/etc/resolv.conf` inside the container shows `nameserver 127.0.0.11`. If an application overwrites this file, fix the application or mount a read-only resolv.conf. Restart the container to regenerate the file from Docker.
- Containers on the default bridge: Docker copies the host’s resolv.conf. If the host uses a loopback resolver, the container may not reach it. Switch the host to use an upstream nameserver, or use user-defined networks where Docker can proxy DNS.
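Another way around a loopback-only host resolver is to give the daemon explicit upstream DNS servers in `/etc/docker/daemon.json`, which Docker then writes into containers that do not override DNS themselves. The nameserver addresses below are illustrative; use resolvers appropriate for your network:

```json
{
  "dns": ["1.1.1.1", "8.8.8.8"]
}
```

Restart the Docker daemon after editing this file for the change to take effect; already-running containers keep their old resolv.conf until recreated.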
### If the cause is iptables or firewall conflict
- Do not run `iptables -F` or equivalent while Docker is managing containers. If you use firewalld, ufw, or custom scripts that manipulate iptables, restart Docker after any firewall change so it can reinsert its rules into the filter and nat tables.
- If the DOCKER chains are missing from `iptables -L` or `iptables -t nat -L`, restart the Docker daemon. With `live-restore: true`, running containers survive the restart.
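A quick way to verify the chains survived a firewall change is to look for the `:DOCKER` chain declaration in the saved ruleset. A sketch (helper name is illustrative; running `iptables-save` requires root on a real host):

```sh
# Illustrative guard: detect whether Docker's DOCKER chain is declared
# in a ruleset. Feed it the output of `iptables-save` on stdin.
docker_chains_present() {
  # iptables-save declares chains as lines like ":DOCKER - [0:0]".
  if grep -q '^:DOCKER ' ; then
    echo "yes"
  else
    echo "no"
  fi
}

# Example usage on a real host (root required):
#   iptables-save | docker_chains_present
```

A "no" after a firewall reload is the cue to restart the Docker daemon so it reinserts its rules.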
### If the cause is conntrack exhaustion
- Increase `net.netfilter.nf_conntrack_max` using sysctl: `sysctl -w net.netfilter.nf_conntrack_max=<value>`. To persist, add the setting to `/etc/sysctl.conf` or a file under `/etc/sysctl.d/`.
- Reduce unnecessary short-lived connections from inside containers. Each new outbound connection through NAT consumes a conntrack entry.
- Review outbound NAT load. Heavy outbound traffic through the bridge MASQUERADE rule creates conntrack state even without published ports.
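To persist the limit across reboots, a drop-in under `/etc/sysctl.d/` works; the file name and value below are illustrative, and the value should be sized to the host’s memory and connection churn:

```
# /etc/sysctl.d/90-conntrack.conf (illustrative name and value)
net.netfilter.nf_conntrack_max = 262144
```

Apply it without a reboot via `sysctl --system` or `sysctl -p /etc/sysctl.d/90-conntrack.conf`.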
### If the cause is bridge or veth failure
- A daemon restart recreates `docker0` and reattaches routing rules. If `live-restore` is enabled, this is safe for running containers, though expect a brief network interruption.
- If veth pairs are orphaned from unclean container removals, manual interface cleanup with `ip link del <veth_name>` or a daemon restart may be required to remove stale links.
### If the cause is host routing failure
- Repair upstream connectivity on the host before debugging Docker. Containers rely on the host’s default gateway and physical interface.
## Prevention
- Monitor conntrack utilization on any host with heavy container traffic. Alert when usage crosses 70% of the maximum.
- Treat iptables as a shared resource between Docker and host firewalls. After any firewall change, validate that Docker’s chains are still present.
- Prefer user-defined bridge networks over the default bridge. User-defined networks enable Docker’s embedded DNS server, which is more reliable than inheriting the host’s resolv.conf.
- Monitor Docker daemon latency, not just process existence. DNS is served by dockerd; rising `/_ping` latency predicts DNS stalls before they become outages.
- Avoid relying on loopback resolvers on the host for containers on the default bridge. Those addresses are not reachable from inside a container network namespace.
## How Netdata helps
- Correlates per-container network I/O with host-level network errors and drops to distinguish container bugs from bridge or veth issues.
- Tracks `nf_conntrack_count` against the host maximum and alerts before silent drops begin.
- Monitors Docker daemon `/_ping` latency to detect DNS stalls caused by daemon stress.
- Surfaces per-container network error counters, including `rx_dropped` and `tx_dropped`, to identify packet loss at the veth or bridge layer.
- Charts container CPU throttling alongside network throughput to reveal if network processing is being starved by CFS bandwidth limits.





