Docker container cannot connect to another container

Connection timeouts or “connection refused” between containers on the same host point to a failure in one of four layers: the bridge network, Docker’s embedded DNS, iptables, or the application bind address. Inter-container networking relies on Linux bridges, veth pairs, iptables rules, and an embedded DNS resolver at 127.0.0.11. This guide isolates the faulty layer and fixes it.

What this means

Docker isolates each container in its own network namespace. Containers on the same custom bridge communicate through a Linux bridge and resolve each other by name via Docker’s embedded DNS. The default bridge network has no embedded DNS. If containers are on different networks, if the target application binds to 127.0.0.1, or if iptables rules were wiped by an external firewall reload, traffic stops. The symptom appears at the application layer, but the root cause can be Layer 2, Layer 3, or the application itself.

Common causes

| Cause | What it looks like | First thing to check |
| --- | --- | --- |
| Containers on different networks | Connection to IP works only on one network; name resolution fails | docker inspect for NetworkSettings.Networks on each container |
| Application bound to 127.0.0.1 | Connection refused to the container IP even though the service is up | ss -tlnp or /proc/net/tcp inside the target container |
| Default bridge network in use | Name resolution fails entirely; only IP addresses work | Whether the containers are on a user-defined network or bridge |
| Firewall or iptables conflict | Intermittent drops after firewall-cmd reload or nft changes | iptables -t filter -L FORWARD -n -v and iptables -t nat -L DOCKER -n -v |
| Embedded DNS failure | Slow or failed lookups on custom networks; external DNS works | docker exec nslookup timing against 127.0.0.11 |
| userland-proxy disabled | Cross-bridge communication breaks when published ports are involved | daemon.json for "userland-proxy": false |
| Overlay cross-node attachable bug | Standalone containers on different Swarm nodes cannot ping each other despite correct IPs | Whether the workload uses standalone containers or Swarm services |

Quick checks

```shell
# List networks attached to the source container
docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' <source_container>

# List containers on the target network
docker network inspect <network_name> --format '{{range .Containers}}{{.Name}} {{end}}'

# Test name resolution from inside the source container
docker exec <source_container> nslookup <target_name>

# Test TCP connectivity to the target container IP and port
docker exec <source_container> nc -zv <target_ip> <port>

# Check if the target application is listening on all interfaces
docker exec <target_container> ss -tlnp

# Check published ports (inter-container traffic does not require -p)
docker port <target_container>

# Check Docker's iptables rules
iptables -t filter -L FORWARD -n -v | grep -i docker
iptables -t nat -L DOCKER -n -v

# Check DNS configuration inside the source container
docker exec <source_container> cat /etc/resolv.conf

# Check for recent daemon network errors
journalctl -u docker.service --since "10 min ago" | grep -i "network\|error"

# Verify conntrack table utilization
echo $(( $(cat /proc/sys/net/netfilter/nf_conntrack_count) * 100 / $(cat /proc/sys/net/netfilter/nf_conntrack_max) ))%
```

If a container lacks ss, nc, or nslookup, attach a debugging container to its network namespace. Use the target's namespace to inspect listening sockets and the source's namespace to test outbound connectivity:

```shell
docker run --rm --network container:<target_container> nicolaka/netshoot ss -tlnp
docker run --rm --network container:<source_container> nicolaka/netshoot nc -zv <target_ip> <port>
```

How to diagnose it

  1. Verify network attachment. Inspect both containers and compare their NetworkSettings.Networks keys. If the target network is missing from the source container, they cannot communicate at the bridge level. Attach the source with docker network connect <network> <source_container> or redeploy both on the same network.

  2. Verify name resolution. On a user-defined network, run docker exec <source> nslookup <target_name>. If this fails but ping <target_ip> succeeds, the embedded DNS is the problem. On the default bridge network, name resolution is expected to fail. Do not use the default bridge for multi-container name-based discovery.

  3. Verify IP-level reachability. Ping the target container’s IP from the source. If ping fails and both containers are on the same network, check the host bridge interface state (ip link show; the default bridge is docker0, custom bridges use br-<short-network-id>) and verify iptables rules. A firewalld reload or nft flush ruleset can wipe Docker’s rules without warning.

  4. Verify the target application bind address. This is the most common cause of “connection refused.” Inside the target container, check ss -tlnp or /proc/net/tcp for listening sockets. If the local address column shows 0100007F (127.0.0.1 in hex) instead of 00000000 (0.0.0.0), the application is not accepting external connections. Reconfigure it to bind to 0.0.0.0.

  5. Distinguish published ports from container-to-container ports. Port publishing with -p is only for traffic originating outside Docker. Two containers on the same network reach each other directly on the container port. Do not use published host ports for internal communication.

  6. Check daemon health. If DNS is slow or failing across multiple containers, check whether dockerd is under resource pressure. Embedded DNS resolution degrades when the daemon is memory-starved or leaking goroutines. Time the API with time curl --unix-socket /var/run/docker.sock http://localhost/_ping.

  7. Check for userland-proxy misconfiguration. If daemon.json contains "userland-proxy": false, inter-container connectivity across bridge networks can break when published ports are involved. If you see this pattern, re-enable the proxy or redesign the network layout.

  8. Check for overlay network limitations. If you are using attachable overlay networks across Swarm nodes with standalone containers (not Swarm services), a known bug prevents packets from entering the container namespace even when ARP and DNS resolve correctly. Use Swarm services for cross-node overlay communication.
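The hex decoding in step 4 can be scripted. A minimal bash sketch (hex2ip is a hypothetical helper; /proc/net/tcp stores IPv4 addresses as little-endian hex):

```shell
# Hypothetical helper: decode a little-endian hex IPv4 address from the
# local_address column of /proc/net/tcp into dotted-quad notation
hex2ip() {
  local h=$1
  printf '%d.%d.%d.%d\n' \
    $((16#${h:6:2})) $((16#${h:4:2})) $((16#${h:2:2})) $((16#${h:0:2}))
}

hex2ip 0100007F   # prints 127.0.0.1 - bound to loopback only
hex2ip 00000000   # prints 0.0.0.0 - listening on all interfaces
```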

Metrics and signals to monitor

| Signal | Why it matters | Warning sign |
| --- | --- | --- |
| Container network errors (rx_dropped, tx_dropped) | Indicates veth pair issues, bridge saturation, or iptables drops | Any nonzero sustained rate |
| Container network I/O throughput | Sudden drops to near zero on an active service indicate a partition or blackhole | tx_bytes or rx_bytes flatlined during expected traffic |
| Host conntrack utilization | When conntrack fills, new connections are silently dropped | nf_conntrack_count >70% of nf_conntrack_max |
| Docker daemon API latency | Embedded DNS is part of dockerd; slowdowns indicate daemon stress | /_ping latency >1s or growing error log volume |
| Docker daemon error logs | Network setup failures and iptables corruption appear here | Sustained increase in error rate |
| Container restart count | A dependency that cannot connect may crash and restart repeatedly | Restart count increasing faster than once per hour |

Fixes

If containers are on different networks

Use docker network connect <network> <container> to attach a running container to an additional network. For new deployments, place both containers on the same custom bridge network created with docker network create. Docker Compose does this automatically.
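As a sketch of the Compose behavior mentioned above (service and image names are examples): Compose creates one user-defined network per project and attaches every service to it, so containers resolve each other by service name.

```yaml
# docker-compose.yml sketch - Compose creates a project-scoped user-defined
# network and attaches both services, so "web" reaches "db" by name.
# Service and image names here are illustrative.
services:
  web:
    image: my-web-image
    depends_on:
      - db
  db:
    image: postgres:16
```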

If the application binds to 127.0.0.1

Reconfigure the application inside the container to bind to 0.0.0.0 or to the specific container interface IP. Many Node.js, Python, and Java frameworks default to localhost only. This change must be made in the application configuration or startup flags inside the image.

If DNS resolution fails on a custom network

Restart the affected containers. This reinitializes the embedded DNS client and flushes any stale cache. This is disruptive: it kills the running process, so drain traffic first. If DNS is consistently slow, check Docker daemon resource usage. Ensure the image does not ship a custom /etc/resolv.conf that overrides Docker’s nameserver 127.0.0.11.
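To rule out an image-supplied resolv.conf, a quick sketch (uses_embedded_dns is a hypothetical helper; copy the file out of the container first):

```shell
# Hypothetical helper: does this resolv.conf still point at Docker's
# embedded DNS resolver at 127.0.0.11?
uses_embedded_dns() {
  grep -q '^nameserver 127\.0\.0\.11' "$1"
}

# Copy the file out of the container, then test it:
#   docker exec <source_container> cat /etc/resolv.conf > resolv.copy
#   uses_embedded_dns resolv.copy && echo "embedded DNS" || echo "overridden"
```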

If the default bridge network is in use

Migrate to a user-defined bridge network. The default docker0 bridge does not support embedded DNS by container name. Legacy --link is deprecated and does not provide the same DNS behavior as custom networks. Do not rely on it for modern deployments.

If firewall or iptables rules are corrupted

Reloading firewalld or running nft flush ruleset on RHEL, Fedora, or Debian 10+ can remove Docker’s rules without restoring them. Restart Docker to recreate the rules:

```shell
systemctl restart docker
```

Warning: This restarts the daemon and interrupts all container operations. Schedule during a maintenance window.

For prevention, either manage iptables manually with "iptables": false in daemon.json, or ensure Docker’s interfaces are in a preserved firewall zone.
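For the manual-iptables route, the daemon.json fragment is a one-key change (a sketch; appropriate only if you already manage the FORWARD and NAT rules yourself, and it takes effect only after a daemon restart):

```json
{
  "iptables": false
}
```

The alternative, keeping Docker's interfaces in a preserved firewall zone, avoids taking over rule management entirely.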

If userland-proxy is disabled

With "userland-proxy": false, Docker relies on iptables DNAT instead of the proxy process. If inter-container communication across bridge networks breaks when published ports are involved, remove the setting to restore the default true value, or avoid routing internal traffic through published ports.

If overlay cross-node communication fails

For standalone containers on attachable overlay networks across Swarm nodes, convert the workload to Swarm services. A known bug prevents standalone containers from communicating across nodes even when VXLAN is correctly established.

Prevention

  • Use custom bridge networks. Never rely on the default bridge for inter-container name resolution or for multi-container applications.
  • Bind applications to 0.0.0.0. Audit application startup configurations to ensure they accept connections from outside the container’s loopback interface.
  • Configure log rotation. Unbounded container logs contribute to disk pressure that can destabilize the daemon and its embedded DNS. Set max-size and max-file log options.
  • Protect iptables rules. If you use firewalld or nftables, ensure Docker’s chains are not flushed during routine reloads.
  • Monitor conntrack saturation. Increase net.netfilter.nf_conntrack_max on busy hosts and alert when utilization exceeds 70%.
  • Set health checks that test dependencies. A health check that probes an upstream database or API endpoint will catch connectivity failures before orchestrators declare the container healthy.
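The conntrack check from Quick checks can be wrapped for alerting. A sketch with a hypothetical conntrack_pct helper (on a live host the two inputs come from /proc/sys/net/netfilter/nf_conntrack_count and nf_conntrack_max):

```shell
# Hypothetical helper: conntrack utilization as an integer percentage,
# given the current entry count and the configured maximum
conntrack_pct() { echo $(( $1 * 100 / $2 )); }

conntrack_pct 183500 262144   # prints 69 - just below the 70% alert threshold

if [ "$(conntrack_pct 183500 262144)" -ge 70 ]; then
  echo "conntrack above 70% - raise net.netfilter.nf_conntrack_max"
fi
```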

How Netdata helps

  • Correlates container network error counters (rx_dropped, tx_dropped) with application error rates to isolate bridge or veth pair failures.
  • Tracks Docker daemon API latency and flags daemon stress that degrades embedded DNS resolution.
  • Alerts on conntrack table utilization before silent connection drops begin.
  • Monitors container restart counts and exit codes to identify crash loops caused by unreachable dependencies.
  • Surfaces per-container network I/O drops that indicate a network partition or misconfiguration.