Docker container cannot connect to another container
Connection timeouts or “connection refused” between containers on the same host point to a failure in one of four layers: the bridge network, Docker’s embedded DNS, iptables, or the application bind address. Inter-container networking relies on Linux bridges, veth pairs, iptables rules, and an embedded DNS resolver at 127.0.0.11. This guide isolates the faulty layer and fixes it.
What this means
Docker isolates each container in its own network namespace. Containers on the same custom bridge communicate through a Linux bridge and resolve each other by name via Docker’s embedded DNS. The default bridge network has no embedded DNS. If containers are on different networks, if the target application binds to 127.0.0.1, or if iptables rules were wiped by an external firewall reload, traffic stops. The symptom appears at the application layer, but the root cause can be Layer 2, Layer 3, or the application itself.
Common causes
| Cause | What it looks like | First thing to check |
|---|---|---|
| Containers on different networks | Connection to IP works only on one network; name resolution fails | docker inspect for NetworkSettings.Networks on each container |
| Application bound to 127.0.0.1 | Connection refused to the container IP even though the service is up | ss -tlnp or /proc/net/tcp inside the target container |
| Default bridge network in use | Name resolution fails entirely; only IP addresses work | Whether the containers are on a user-defined network or bridge |
| Firewall or iptables conflict | Intermittent drops after firewall-cmd reload or nft changes | iptables -t filter -L FORWARD -n -v and iptables -t nat -L DOCKER -n -v |
| Embedded DNS failure | Slow or failed lookups on custom networks; external DNS works | docker exec nslookup timing against 127.0.0.11 |
| userland-proxy disabled | Cross-bridge communication breaks when published ports are involved | daemon.json for "userland-proxy": false |
| Overlay cross-node attachable bug | Standalone containers on different Swarm nodes cannot ping each other despite correct IPs | Whether the workload uses standalone containers or Swarm services |
Quick checks
```bash
# List networks attached to the source container
docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' <source_container>

# List containers on the target network
docker network inspect <network_name> --format '{{range .Containers}}{{.Name}} {{end}}'

# Test name resolution from inside the source container
docker exec <source_container> nslookup <target_name>

# Test TCP connectivity to the target container IP and port
docker exec <source_container> nc -zv <target_ip> <port>

# Check if the target application is listening on all interfaces
docker exec <target_container> ss -tlnp

# Check published ports (inter-container traffic does not require -p)
docker port <target_container>

# Check Docker's iptables rules
iptables -t filter -L FORWARD -n -v | grep -i docker
iptables -t nat -L DOCKER -n -v

# Check DNS configuration inside the source container
docker exec <source_container> cat /etc/resolv.conf

# Check for recent daemon network errors
journalctl -u docker.service --since "10 min ago" | grep -i "network\|error"

# Verify conntrack table utilization
echo $(( $(cat /proc/sys/net/netfilter/nf_conntrack_count) * 100 / $(cat /proc/sys/net/netfilter/nf_conntrack_max) ))%
```
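The conntrack one-liner is simple integer arithmetic; this Python sketch performs the same calculation (the sample values are illustrative, not measurements — on a real host they come from `/proc/sys/net/netfilter/nf_conntrack_count` and `nf_conntrack_max`):

```python
def conntrack_utilization(count: int, maximum: int) -> int:
    """Percentage of the conntrack table currently in use (integer truncation,
    matching the shell $(( ... )) arithmetic above)."""
    return count * 100 // maximum

# Illustrative values: 183500 tracked connections out of a 262144-entry table.
print(conntrack_utilization(183_500, 262_144))  # 69 -- approaching the 70% warning line
```

Anything at or above 70% is the warning threshold used later in this guide: past that point, bursts of new connections risk silent drops.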
If a container lacks ss, nc, or nslookup, attach a debugging container to the relevant container's network namespace instead. Note that the listening-socket check must run in the target's namespace, while the connectivity check runs from the source's:

```bash
docker run --rm --network container:<target_container> nicolaka/netshoot ss -tlnp
docker run --rm --network container:<source_container> nicolaka/netshoot nc -zv <target_ip> <port>
```
How to diagnose it
1. Verify network attachment. Inspect both containers and compare their `NetworkSettings.Networks` keys. If the target network is missing from the source container, they cannot communicate at the bridge level. Attach the source with `docker network connect <network> <source_container>` or redeploy both on the same network.
2. Verify name resolution. On a user-defined network, run `docker exec <source> nslookup <target_name>`. If this fails but `ping <target_ip>` succeeds, the embedded DNS is the problem. On the default `bridge` network, name resolution is expected to fail. Do not use the default bridge for multi-container name-based discovery.
3. Verify IP-level reachability. Ping the target container's IP from the source. If ping fails and both containers are on the same network, check the host bridge interface state (`ip link show`; the default bridge is `docker0`, custom bridges use `br-<short-network-id>`) and verify iptables rules. A `firewalld` reload or `nft flush ruleset` can wipe Docker's rules without warning.
4. Verify the target application bind address. This is the most common cause of "connection refused." Inside the target container, check `ss -tlnp` or `/proc/net/tcp` for listening sockets. If the local address column shows `0100007F` (127.0.0.1 in hex) instead of `00000000` (0.0.0.0), the application is not accepting external connections. Reconfigure it to bind to `0.0.0.0`.
5. Distinguish published ports from container-to-container ports. Port publishing with `-p` is only for traffic originating outside Docker. Two containers on the same network reach each other directly on the container port. Do not use published host ports for internal communication.
6. Check daemon health. If DNS is slow or failing across multiple containers, check whether `dockerd` is under resource pressure. Embedded DNS resolution degrades when the daemon is memory-starved or leaking goroutines. Time the API with `time curl --unix-socket /var/run/docker.sock http://localhost/_ping`.
7. Check for userland-proxy misconfiguration. If `daemon.json` contains `"userland-proxy": false`, inter-container connectivity across bridge networks can break when published ports are involved. If you see this pattern, re-enable the proxy or redesign the network layout.
8. Check for overlay network limitations. If you are using attachable overlay networks across Swarm nodes with standalone containers (not Swarm services), a known bug prevents packets from entering the container namespace even when ARP and DNS resolve correctly. Use Swarm services for cross-node overlay communication.
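The hex local-address fields in `/proc/net/tcp` (`0100007F` vs `00000000`) can be decoded with a few lines of Python when ss is unavailable. This is a sketch; the function name `decode_proc_addr` is illustrative, but the decoding itself follows the kernel's format, which stores IPv4 addresses little-endian:

```python
import socket
import struct

def decode_proc_addr(hex_field: str) -> str:
    """Convert a /proc/net/tcp ADDRESS:PORT field (little-endian hex)
    into the familiar 'ip:port' notation."""
    addr_hex, port_hex = hex_field.split(":")
    # IPv4 addresses in /proc/net/tcp are in host (little-endian) byte order;
    # repack as little-endian bytes and render as dotted quad.
    ip = socket.inet_ntoa(struct.pack("<I", int(addr_hex, 16)))
    return f"{ip}:{int(port_hex, 16)}"

# 0100007F -> 127.0.0.1: the service only accepts loopback connections.
print(decode_proc_addr("0100007F:1F90"))  # 127.0.0.1:8080
# 00000000 -> 0.0.0.0: the service listens on all interfaces.
print(decode_proc_addr("00000000:1F90"))  # 0.0.0.0:8080
```

A local address of `127.0.0.1:<port>` on the target's listening socket confirms the bind-address diagnosis above.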
Metrics and signals to monitor
| Signal | Why it matters | Warning sign |
|---|---|---|
| Container network errors (rx_dropped, tx_dropped) | Indicates veth pair issues, bridge saturation, or iptables drops | Any nonzero sustained rate |
| Container network I/O throughput | Sudden drops to near zero on an active service indicate a partition or blackhole | tx_bytes or rx_bytes flatlined during expected traffic |
| Host conntrack utilization | When conntrack fills, new connections are silently dropped | nf_conntrack_count >70% of nf_conntrack_max |
| Docker daemon API latency | Embedded DNS is part of dockerd; slowdowns indicate daemon stress | /_ping latency >1s or growing error log volume |
| Docker daemon error logs | Network setup failures and iptables corruption appear here | Sustained increase in error rate |
| Container restart count | A dependency that cannot connect may crash and restart repeatedly | Restart count increasing faster than once per hour |
Fixes
If containers are on different networks
Use docker network connect <network> <container> to attach a running container to an additional network. For new deployments, place both containers on the same custom bridge network created with docker network create. Docker Compose does this automatically.
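In Compose, the shared network can also be declared explicitly. A minimal sketch (the service, image, and network names here are illustrative):

```yaml
# docker-compose.yml - both services join the same user-defined bridge,
# so "web" can reach the database at the hostname "db" on its container port.
services:
  web:
    image: my-app:latest        # illustrative image name
    networks:
      - backend
  db:
    image: postgres:16
    networks:
      - backend

networks:
  backend:
    driver: bridge              # user-defined bridge, so embedded DNS works
```

With this layout, `web` connects to `db:5432` directly; no published ports are needed for the internal hop.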
If the application binds to 127.0.0.1
Reconfigure the application inside the container to bind to 0.0.0.0 or to the specific container interface IP. Many Node.js, Python, and Java frameworks default to localhost only. This change must be made in the application configuration or startup flags inside the image.
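The difference between the two bind addresses can be demonstrated at the socket level; this sketch shows what the framework flag ultimately controls (port 0 asks the OS for any free port, so the example is safe to run anywhere):

```python
import socket

# Bound to loopback only: reachable inside this network namespace,
# "connection refused" for any other container hitting the bridge IP.
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(("127.0.0.1", 0))

# Bound to all interfaces: reachable via the container's bridge IP.
wildcard = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wildcard.bind(("0.0.0.0", 0))

print(loopback.getsockname()[0])  # 127.0.0.1
print(wildcard.getsockname()[0])  # 0.0.0.0
```

In a web framework the same choice usually surfaces as a host parameter or startup flag; whatever the mechanism, the listening address must end up as 0.0.0.0 (or the container interface IP) rather than 127.0.0.1.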
If DNS resolution fails on a custom network
Restart the affected containers. This reinitializes the embedded DNS client and flushes any stale cache. This is disruptive: it kills the running process, so drain traffic first. If DNS is consistently slow, check Docker daemon resource usage. Ensure the image does not ship a custom /etc/resolv.conf that overrides Docker’s nameserver 127.0.0.11.
If the default bridge network is in use
Migrate to a user-defined bridge network. The default docker0 bridge does not support embedded DNS by container name. Legacy --link is deprecated and does not provide the same DNS behavior as custom networks. Do not rely on it for modern deployments.
If firewall or iptables rules are corrupted
Reloading firewalld or running nft flush ruleset on RHEL, Fedora, or Debian 10+ can remove Docker’s rules without restoring them. Restart Docker to recreate the rules:
systemctl restart docker
Warning: This restarts the daemon and interrupts all container operations. Schedule during a maintenance window.
For prevention, either manage iptables manually with "iptables": false in daemon.json, or ensure Docker’s interfaces are in a preserved firewall zone.
If userland-proxy is disabled
With "userland-proxy": false, Docker relies on iptables DNAT instead of the proxy process. If inter-container communication across bridge networks breaks when published ports are involved, remove the setting to restore the default true value, or avoid routing internal traffic through published ports.
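Since true is the default, deleting the key from daemon.json is equivalent to setting it explicitly. A sketch of the explicit form (restart the daemon after editing):

```json
{
  "userland-proxy": true
}
```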
If overlay cross-node communication fails
For standalone containers on attachable overlay networks across Swarm nodes, convert the workload to Swarm services. A known bug prevents standalone containers from communicating across nodes even when VXLAN is correctly established.
Prevention
- Use custom bridge networks. Never rely on the default bridge for inter-container name resolution or for multi-container applications.
- Bind applications to 0.0.0.0. Audit application startup configurations to ensure they accept connections from outside the container’s loopback interface.
- Configure log rotation. Unbounded container logs contribute to disk pressure that can destabilize the daemon and its embedded DNS. Set the `max-size` and `max-file` log options.
- Protect iptables rules. If you use `firewalld` or `nftables`, ensure Docker's chains are not flushed during routine reloads.
- Monitor conntrack saturation. Increase `net.netfilter.nf_conntrack_max` on busy hosts and alert when utilization exceeds 70%.
- Set health checks that test dependencies. A health check that probes an upstream database or API endpoint will catch connectivity failures before orchestrators declare the container healthy.
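The log-rotation settings above live in the daemon configuration. A minimal daemon.json sketch (the size and file counts are illustrative starting points, not recommendations for every workload):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

This applies only to containers created after the daemon restarts; existing containers keep their original logging configuration until recreated.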
How Netdata helps
- Correlates container network error counters (rx_dropped, tx_dropped) with application error rates to isolate bridge or veth pair failures.
- Tracks Docker daemon API latency and flags daemon stress that degrades embedded DNS resolution.
- Alerts on conntrack table utilization before silent connection drops begin.
- Monitors container restart counts and exit codes to identify crash loops caused by unreachable dependencies.
- Surfaces per-container network I/O drops that indicate a network partition or misconfiguration.