Docker published port not reachable: troubleshooting -p and EXPOSE

You mapped a port with -p 8080:80, but curl against the host IP returns connection refused. docker ps shows the mapping, the container is running, and the port still appears closed.

A published port depends on three layers: a runtime mapping rule (-p), a host forwarding path (iptables DNAT and FORWARD policy), and an application listener inside the container bound to an interface that receives the forwarded packet. EXPOSE in a Dockerfile is metadata. It does not publish ports, create firewall rules, or set bind addresses.

The task is to distinguish a missing -p mapping from an application binding to 127.0.0.1 inside the container, and both of those from a host firewall silently dropping forwarded packets.

What this means

Docker publishes ports by adding iptables DNAT rules in the nat table’s DOCKER chain. An incoming packet to the host port is rewritten to the container’s bridge IP and port, then traverses the host FORWARD chain, crosses the bridge (docker0 on the default network), and enters the container through its veth pair.
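
What Docker writes into the nat table can be sketched concretely. The rule below is an illustrative iptables-save line (the container IP 172.17.0.2 and both ports are made-up example values, not taken from any real host), and the awk snippet shows how the host port maps to the container destination:

```shell
# Illustrative published-port rule, as `iptables-save -t nat` might print it
# (172.17.0.2 and the ports are example values)
rule='-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80'

# Walk the rule's tokens and extract the host port and container destination
mapping=$(echo "$rule" | awk '{
  for (i = 1; i < NF; i++) {
    if ($i == "--dport")          hp  = $(i + 1)
    if ($i == "--to-destination") dst = $(i + 1)
  }
  print "host port " hp " -> " dst
}')
echo "$mapping"
```

On a real host, compare this against the output of sudo iptables-save -t nat | grep DOCKER.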

If the userland proxy is enabled (the default), a docker-proxy process also binds the host port and forwards traffic into the container, which covers hairpin NAT and some IPv6 scenarios. When the iptables rules are intact, external connections are rewritten by DNAT and rarely touch the proxy; if the rules are missing or bypassed, connections may still succeed through the proxy, which can mask a broken DNAT path during diagnosis.
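
If you need to check whether the proxy is in play, the switch lives in the daemon configuration. The /etc/docker/daemon.json fragment below is shown purely for illustration; disabling the proxy changes localhost and hairpin behavior and is not a general recommendation:

```json
{
  "userland-proxy": false
}
```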

EXPOSE in a Dockerfile tells docker run -P which ports to publish dynamically. Without -p or -P at runtime, EXPOSE creates no iptables rules and the port is unreachable from outside the container.
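
As an illustration, the Dockerfile below (image and port are examples) records port 80 as metadata; nothing is published until the container is started with -p 8080:80 or -P:

```dockerfile
# Hypothetical image: EXPOSE records the port as metadata only.
FROM nginx:alpine
# Read by `docker run -P` when choosing which ports to publish; this line
# creates no iptables rule and no host binding by itself.
EXPOSE 80
```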

Even with -p, the application must listen on an address that can receive the forwarded packet. If it binds to 127.0.0.1 inside its network namespace, the DNAT rule still delivers the packet to the container's eth0 with the container IP as its destination, but no socket is bound on that address, so the kernel rejects the connection. The application must bind to 0.0.0.0 (or ::) or the specific container IP.
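
Loopback-only listeners are easy to spot in ss output. The sample rows below are illustrative (process names, PIDs, and ports are invented), and the awk filter pulls out any listener bound to 127.0.0.1:

```shell
# Illustrative `ss -tlnp` rows as captured inside a container
# (process names, PIDs, and ports are invented for the example)
sample='LISTEN 0 128 127.0.0.1:3000 0.0.0.0:* users:(("node",pid=1,fd=18))
LISTEN 0 4096 0.0.0.0:9090 0.0.0.0:* users:(("metrics",pid=7,fd=3))'

# Column 4 is the local address; anything on 127.0.0.1 cannot receive
# packets forwarded to the container IP by the DNAT rule
loopback_only=$(echo "$sample" | awk '$4 ~ /^127\.0\.0\.1:/ {print $4}')
echo "loopback-only listener(s): $loopback_only"
```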

Host firewalls that override Docker’s iptables rules, port conflicts where another process owns the host port, and conntrack exhaustion that silently drops NAT connections are also common breaks.

Common causes

| Cause | What it looks like | First thing to check |
| --- | --- | --- |
| Application binds to 127.0.0.1 inside the container | docker ps shows the mapping, but every connection is refused | docker exec <id> ss -tlnp |
| Missing or misconfigured -p mapping | No host port appears in docker ps | docker ps --format '{{.Ports}}' |
| Host firewall blocks forwarded packets | Mapping and app listener look correct, but connections time out or are reset | sudo iptables -L FORWARD -n and sysctl net.ipv4.ip_forward |
| Docker iptables rules missing | Mapping exists, local host curl works, remote access fails | sudo iptables -t nat -L DOCKER -n -v |
| Port conflict on the host | Container may fail to start, or the mapping is absent | sudo ss -tlnp \| grep <host_port> |
| conntrack table exhaustion | Intermittent connection hangs; works again after an idle period | /proc/sys/net/netfilter/nf_conntrack_count vs nf_conntrack_max |
| Application has not finished starting | Port is mapped, but connections are refused until init completes | docker exec <id> curl -s localhost:<port> |

Quick checks

# Verify the port mapping exists
docker ps --format 'table {{.Names}}\t{{.Ports}}' | grep <container_name>

# Show precise host-to-container mapping
docker container port <container_name>

# Check application listener inside container (install iproute2 if ss is missing)
docker exec <container_id> ss -tlnp

# Test responsiveness from inside the container
docker exec <container_id> sh -c 'curl -s -o /dev/null -w "%{http_code}" localhost:<container_port>'

# List Docker iptables DNAT rules
sudo iptables -t nat -L DOCKER -n -v

# Confirm no other host process owns the port
sudo ss -tlnp | grep <host_port>

# Verify kernel IP forwarding
sysctl net.ipv4.ip_forward

# Check conntrack utilization
echo "$(cat /proc/sys/net/netfilter/nf_conntrack_count) / $(cat /proc/sys/net/netfilter/nf_conntrack_max)"

# Check Docker daemon logs for port or network errors
sudo journalctl -u docker.service --since "10 minutes ago" | grep -iE "error|fail|port"
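
The conntrack comparison above can be wrapped in a small utilization check. The procfs paths are standard; the optional arguments exist only so the function can be exercised with sample numbers:

```shell
# Print conntrack utilization and warn above 80%.
# count/max default to the procfs values but can be passed in for testing.
conntrack_pct() {
  count=${1:-$(cat /proc/sys/net/netfilter/nf_conntrack_count)}
  max=${2:-$(cat /proc/sys/net/netfilter/nf_conntrack_max)}
  pct=$(( count * 100 / max ))
  echo "conntrack: $count / $max ($pct%)"
  if [ "$pct" -ge 80 ]; then
    echo "WARNING: above 80%; new NAT connections may be dropped silently"
  fi
}

# Run with sample numbers; call with no arguments on a real host
conntrack_pct 180000 262144
```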

How to diagnose it

  1. Confirm the mapping. docker ps should show 0.0.0.0:8080->80/tcp in the PORTS column. An empty column means the container started without -p or with incorrect syntax. EXPOSE alone does not populate this column.

  2. Verify the application listener. Run docker exec <id> ss -tlnp. If the container port is bound to 127.0.0.1, the application will refuse external connections. Reconfigure it to bind to 0.0.0.0 or ::.

  3. Test from inside the container. Run curl or nc to localhost:<container_port> from inside the container. If this fails, the application is not ready, is listening on a different port, or has crashed. Fix the application before investigating Docker networking.

  4. Test from the Docker host. Run curl localhost:<host_port> on the host. If it works locally but fails remotely, the DNAT rule is present but the packet is blocked after NAT, or the bind address is restricted. If it fails locally, the DNAT rule is missing or the application is not responding.

  5. Inspect Docker NAT rules. Run sudo iptables -t nat -L DOCKER -n -v. Look for a rule matching your host port that redirects to the container’s bridge IP. A missing rule means Docker’s network setup failed, which can happen after a firewall manager flushes iptables or after an unclean daemon restart.

  6. Check host forwarding and firewall. Run sudo iptables -L FORWARD -n and verify the policy is not DROP. Check sysctl net.ipv4.ip_forward returns 1. If the host firewall manager (firewalld, ufw, or a custom script) overwrote Docker’s rules, forwarding fails even though the NAT rule exists.

  7. Check for port conflicts. Run sudo ss -tlnp | grep <host_port>. If a process other than dockerd or docker-proxy owns the port, Docker cannot bind it, and docker run typically fails with a bind error such as "address already in use"; if the container appears to run anyway, the mapping will be absent or broken.

  8. Check conntrack utilization. If connections hang or fail silently after working initially, compare /proc/sys/net/netfilter/nf_conntrack_count to nf_conntrack_max. If the ratio exceeds 80%, the kernel drops new NAT connections. Increase nf_conntrack_max or reduce connection churn.

  9. Check daemon logs. Run sudo journalctl -u docker.service and look for errors about port allocation, network setup, or plugin failures. A daemon under heavy load during container start may fail to complete network setup before a timeout.
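
Steps 3, 4, and 6 form a small decision tree that can be sketched as a function; the yes/no arguments stand for the results of the inside-container, host, and remote reachability tests:

```shell
# Sketch of the triage logic from the steps above. Arguments are yes/no:
# was the port reachable from inside the container, from the host, remotely?
triage() {
  inside=$1; host=$2; remote=$3
  if [ "$inside" = "no" ]; then
    echo "application not listening: fix the app before Docker networking"
  elif [ "$host" = "no" ]; then
    echo "DNAT rule missing or bind address restricted: inspect nat/DOCKER"
  elif [ "$remote" = "no" ]; then
    echo "blocked after NAT: check FORWARD policy, ip_forward, host firewall"
  else
    echo "path healthy end to end"
  fi
}

# Example: works inside the container and from the host, fails remotely
triage yes yes no
```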

Metrics and signals to monitor

| Signal | Why it matters | Warning sign |
| --- | --- | --- |
| veth interface errors (rx_errors, tx_errors) | Indicates bridge-level or veth packet loss | Sustained nonzero rate |
| conntrack count vs max | NAT table exhaustion causes silent connection drops | Utilization >70% of max |
| Docker daemon /_ping latency | A hung daemon cannot maintain network state | Latency >1 second sustained |
| Container health check status | Distinguishes network failure from application failure | Unhealthy status while the port is reachable |
| iptables rule count in nat/DOCKER | Rules can be flushed by external firewall managers | Rule count drops unexpectedly |
| veth drops (tx_dropped, rx_dropped) | May indicate conntrack or bridge saturation | Counter increasing |
| Host FORWARD policy | A DROP policy blocks published-port traffic | Policy is DROP |
| docker0 operstate | A down bridge breaks default-network connectivity | Operstate is not up |

Fixes

If the application binds to 127.0.0.1 inside the container

Reconfigure the application to bind to 0.0.0.0 or ::. For example, change a Node.js application from server.listen(3000, 'localhost') to server.listen(3000) or server.listen(3000, '0.0.0.0'). Restart the container after fixing the bind address.

If the -p mapping is missing

Stop and recreate the container with the correct -p flag. EXPOSE does not publish ports. Use -P (uppercase) to publish all EXPOSEd ports dynamically to ephemeral host ports, or specify explicit mappings such as -p 127.0.0.1:8080:80/tcp or -p 8080:80. In Docker Compose, verify the ports: section is present under the service.
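
In Compose, the ports: key is the equivalent of -p; a minimal sketch (service name, image, and ports are examples):

```yaml
# Hypothetical service: each entry under ports: behaves like a -p flag
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"             # host 8080 -> container 80
      - "127.0.0.1:8443:443"  # published on the host's loopback only
```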

If the host firewall blocks forwarding

Warning: these commands change firewall state.

If iptables -L FORWARD shows DROP, temporarily allow forwarding:

sudo iptables -P FORWARD ACCEPT
sudo ip6tables -P FORWARD ACCEPT

Also verify IP forwarding:

sudo sysctl -w net.ipv4.ip_forward=1

For a permanent fix, configure your firewall manager to preserve Docker rules or set the policy in its configuration. Docker manages the DOCKER chain itself and evaluates the DOCKER-USER chain first, which is the supported place for your own filtering rules.

If Docker iptables rules are missing

If iptables -t nat -L DOCKER is empty or does not contain your port, Docker’s network setup failed. This often happens after another tool flushes iptables. Check if a firewall manager is active (systemctl status firewalld or ufw status) and integrate Docker with it rather than disabling it. A daemon restart recreates the rules, but is disruptive. If live-restore is enabled, running containers survive the restart. Otherwise, schedule a maintenance window.

If the host port is already in use

Choose a different host port or stop the conflicting service. Verify with ss -tlnp.

If conntrack is exhausted

Increase the table size:

sudo sysctl -w net.netfilter.nf_conntrack_max=262144

Then monitor connection rates. If you consistently fill 262144 entries, reduce health check frequency, reuse connections through pooling, or redesign the network layout to reduce NAT usage.
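
To make the larger table survive reboots, the usual approach is a sysctl drop-in; the file name below is arbitrary:

```
# /etc/sysctl.d/99-conntrack.conf (hypothetical file name)
# Applied at boot; load immediately with: sudo sysctl --system
net.netfilter.nf_conntrack_max = 262144
```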

Prevention

  • Validate runtime mappings in CI. A container started without -p will pass local health checks but reject remote traffic.
  • Bind applications to 0.0.0.0 by default. Loopback-only listeners pass container-internal tests and fail host-level reachability checks.
  • Protect Docker’s iptables rules. External firewall managers that flush tables silently remove the DNAT rules required for port forwarding.
  • Monitor conntrack and network error counters. Early warning on conntrack utilization prevents silent connection drops during traffic spikes.
  • Use explicit bind addresses. Restricting a port to 127.0.0.1 on the host prevents accidental external exposure, while omitting the bind address may expose services unintentionally.

How Netdata helps

Netdata correlates container network drops and errors with connection-refused events to distinguish bridge problems from application failures. It tracks conntrack utilization against nf_conntrack_max and alerts before silent drops begin. Docker daemon API latency is monitored to detect stalls that prevent port setup. Container health check status is shown alongside network metrics to verify whether the port is reachable but the application is rejecting requests.