Docker published port not reachable: troubleshooting -p and EXPOSE
You mapped a port with -p 8080:80, but curl against the host IP returns connection refused. docker ps shows the mapping, the container is running, and the port still appears closed.
A published port depends on three layers: a runtime mapping rule (-p), a host forwarding path (iptables DNAT and FORWARD policy), and an application listener inside the container bound to an interface that receives the forwarded packet. EXPOSE in a Dockerfile is metadata. It does not publish ports, create firewall rules, or set bind addresses.
Distinguish a missing -p mapping from an application binding to 127.0.0.1 inside the container, or from a host firewall silently dropping forwarded packets.
What this means
Docker publishes ports by adding iptables DNAT rules in the nat table’s DOCKER chain. An incoming packet to the host port is rewritten to the container’s bridge IP and port, then traverses the host FORWARD chain, crosses the bridge (docker0 on the default network), and enters the container through its veth pair.
If userland-proxy is enabled, a docker-proxy process binds the host port and forwards into the container to handle hairpin NAT and some IPv6 scenarios. When iptables rules are intact, the proxy is often not in the data path for external connections; if rules are missing or bypassed, behavior changes.
EXPOSE in a Dockerfile tells docker run -P which ports to publish dynamically. Without -p or -P at runtime, EXPOSE creates no iptables rules and the port is unreachable from outside the container.
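A minimal illustration (image name and port are placeholders): EXPOSE only records metadata that `-P` consumes at run time.

```dockerfile
# EXPOSE records intent for tooling; it publishes nothing by itself
FROM nginx:alpine
EXPOSE 80
```

Built as `myimage`, `docker run -d myimage` leaves port 80 unreachable from the host, `docker run -d -P myimage` publishes it to an ephemeral host port, and `docker run -d -p 8080:80 myimage` publishes it explicitly.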
Even with -p, the application must listen on an address that can receive the forwarded packet. If it binds to 127.0.0.1 inside its network namespace, the DNAT rule delivers the packet to eth0, but the kernel routes it to loopback. Because the listener is only on loopback, the connection is refused. The application must bind to 0.0.0.0 or the specific container IP.
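A quick way to spot this case is to filter `ss` output for loopback-only listeners. This is a sketch: the `loopback_only` helper name is ours, and it assumes the Local Address:Port column that `ss -tln` prints as the fourth field.

```sh
# Print listeners bound only to loopback, which are unreachable
# through Docker's DNAT path. Reads `ss -tln` output on stdin.
loopback_only() {
  awk 'NR > 1 && ($4 ~ /^127\./ || $4 ~ /^\[::1\]/) { print "loopback-only listener:", $4 }'
}

# Typical usage (container id is a placeholder):
#   docker exec <id> ss -tln | loopback_only
```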
Host firewalls that override Docker’s iptables rules, port conflicts where another process owns the host port, and conntrack exhaustion that silently drops NAT connections are also common breaks.
Common causes
| Cause | What it looks like | First thing to check |
|---|---|---|
| Application binds to 127.0.0.1 inside container | docker ps shows the mapping, but every connection is refused | docker exec <id> ss -tlnp |
| Missing or misconfigured -p mapping | No host port appears in docker ps | docker ps --format '{{.Ports}}' |
| Host firewall blocks forwarded packets | Mapping and app listener look correct, but connections time out or are reset | sudo iptables -L FORWARD -n and sysctl net.ipv4.ip_forward |
| Docker iptables rules missing | Mapping exists, local host curl works, remote access fails | sudo iptables -t nat -L DOCKER -n -v |
| Port conflict on host | Container may fail to start, or mapping is absent | sudo ss -tlnp |
| conntrack table exhaustion | Intermittent connection hangs; works after idle period | /proc/sys/net/netfilter/nf_conntrack_count vs nf_conntrack_max |
| Application has not finished starting | Port is mapped, but connections are refused until init completes | docker exec <id> curl -s localhost:<port> |
Quick checks
```sh
# Verify the port mapping exists
docker ps --format 'table {{.Names}}\t{{.Ports}}' | grep <container_name>

# Show precise host-to-container mapping
docker container port <container_name>

# Check application listener inside container (install iproute2 if ss is missing)
docker exec <container_id> ss -tlnp

# Test responsiveness from inside the container
docker exec <container_id> sh -c 'curl -s -o /dev/null -w "%{http_code}" localhost:<container_port>'

# List Docker iptables DNAT rules
sudo iptables -t nat -L DOCKER -n -v

# Confirm no other host process owns the port
sudo ss -tlnp | grep <host_port>

# Verify kernel IP forwarding
sysctl net.ipv4.ip_forward

# Check conntrack utilization
echo "$(cat /proc/sys/net/netfilter/nf_conntrack_count) / $(cat /proc/sys/net/netfilter/nf_conntrack_max)"

# Check Docker daemon logs for port or network errors
sudo journalctl -u docker.service --since "10 minutes ago" | grep -iE "error|fail|port"
```
How to diagnose it
1. Confirm the mapping. `docker ps` should show `0.0.0.0:8080->80/tcp` in the PORTS column. An empty column means the container started without `-p` or with incorrect syntax. EXPOSE alone does not populate this column.
2. Verify the application listener. Run `docker exec <id> ss -tlnp`. If the container port is bound to `127.0.0.1`, the application will refuse external connections. Reconfigure it to bind to `0.0.0.0` or `::`.
3. Test from inside the container. Run `curl` or `nc` against `localhost:<container_port>` from inside the container. If this fails, the application is not ready, is listening on a different port, or has crashed. Fix the application before investigating Docker networking.
4. Test from the Docker host. Run `curl localhost:<host_port>` on the host. If it works locally but fails remotely, the DNAT rule is present but the packet is blocked after NAT, or the bind address is restricted. If it fails locally, the DNAT rule is missing or the application is not responding.
5. Inspect Docker NAT rules. Run `sudo iptables -t nat -L DOCKER -n -v`. Look for a rule matching your host port that redirects to the container's bridge IP. A missing rule means Docker's network setup failed, which can happen after a firewall manager flushes iptables or after an unclean daemon restart.
6. Check host forwarding and firewall. Run `sudo iptables -L FORWARD -n` and verify the policy is not `DROP`. Check that `sysctl net.ipv4.ip_forward` returns `1`. If the host firewall manager (firewalld, ufw, or a custom script) overwrote Docker's rules, forwarding fails even though the NAT rule exists.
7. Check for port conflicts. Run `sudo ss -tlnp | grep <host_port>`. If a process other than `dockerd` or `docker-proxy` owns the port, Docker cannot bind it. The container may still start, but the mapping will be absent or broken.
8. Check conntrack utilization. If connections hang or fail silently after working initially, compare `/proc/sys/net/netfilter/nf_conntrack_count` to `nf_conntrack_max`. If the ratio exceeds 80%, the kernel drops new NAT connections. Increase `nf_conntrack_max` or reduce connection churn.
9. Check daemon logs. Run `sudo journalctl -u docker.service` and look for errors about port allocation, network setup, or plugin failures. A daemon under heavy load during container start may fail to complete network setup before a timeout.
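The conntrack comparison above can be scripted. The `conntrack_pct` helper below is our own sketch; it accepts explicit count and max arguments so the arithmetic can be checked on hosts where the nf_conntrack module is not loaded.

```sh
# Print conntrack utilization as an integer percentage.
# With no arguments it reads the live /proc values; pass count and max
# explicitly to exercise the arithmetic.
conntrack_pct() {
  count="${1:-$(cat /proc/sys/net/netfilter/nf_conntrack_count)}"
  max="${2:-$(cat /proc/sys/net/netfilter/nf_conntrack_max)}"
  awk -v c="$count" -v m="$max" 'BEGIN { printf "%d\n", (c * 100) / m }'
}

conntrack_pct 196608 262144   # prints 75
```

If the live percentage approaches 80%, treat silent drops of new NAT connections as likely.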
Metrics and signals to monitor
| Signal | Why it matters | Warning sign |
|---|---|---|
| veth interface errors (rx_errors, tx_errors) | Indicates bridge-level or veth packet loss | Sustained nonzero rate |
| conntrack count vs max | NAT table exhaustion causes silent connection drops | Utilization >70% of max |
| Docker daemon /_ping latency | A hung daemon cannot maintain network state | Latency >1 second sustained |
| Container health check status | Distinguishes network failure from application failure | Unhealthy status while port is reachable |
| iptables rule count in nat/DOCKER | Rules can be flushed by external firewall managers | Rule count drops unexpectedly |
| veth drops (tx_dropped, rx_dropped) | May indicate conntrack or bridge saturation | Counter increasing |
| Host FORWARD policy | A DROP policy blocks published-port traffic | Policy is DROP |
| docker0 operstate | A down bridge breaks default-network connectivity | Operstate is not up |
Fixes
If the application binds to 127.0.0.1 inside the container
Reconfigure the application to bind to 0.0.0.0 or ::. For example, change a Node.js application from server.listen(3000, 'localhost') to server.listen(3000) or server.listen(3000, '0.0.0.0'). Restart the container after fixing the bind address.
If the -p mapping is missing
Stop and recreate the container with the correct -p flag. EXPOSE does not publish ports. Use -P (uppercase) to publish all EXPOSEd ports dynamically to ephemeral host ports, or specify explicit mappings such as -p 127.0.0.1:8080:80/tcp or -p 8080:80. In Docker Compose, verify the ports: section is present under the service.
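In Compose, the same mappings look like this (service and image names are placeholders):

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"              # host 8080 -> container 80, all host interfaces
      - "127.0.0.1:8443:443"   # published only on the host loopback
```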
If the host firewall blocks forwarding
Warning: these commands change firewall state.
If iptables -L FORWARD shows DROP, temporarily allow forwarding:

```sh
sudo iptables -P FORWARD ACCEPT
sudo ip6tables -P FORWARD ACCEPT
```

Also verify IP forwarding:

```sh
sudo sysctl -w net.ipv4.ip_forward=1
```
For a permanent fix, configure your firewall manager to preserve Docker rules or set the policy in its configuration. Docker expects to manage the DOCKER and DOCKER-USER chains.
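For example, administrator rules belong in the DOCKER-USER chain, which Docker evaluates before its own forwarding rules and does not rewrite. One caveat: packets reach this chain after DNAT, so match the original destination port via conntrack rather than with --dport. This is a sketch; the host port 8080 is an assumption.

```sh
# Allow traffic that originally targeted host port 8080,
# regardless of the container port it was DNATed to.
sudo iptables -I DOCKER-USER -p tcp -m conntrack --ctorigdstport 8080 --ctdir ORIGINAL -j ACCEPT
```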
If Docker iptables rules are missing
If iptables -t nat -L DOCKER is empty or does not contain your port, Docker’s network setup failed. This often happens after another tool flushes iptables. Check if a firewall manager is active (systemctl status firewalld or ufw status) and integrate Docker with it rather than disabling it. A daemon restart recreates the rules, but is disruptive. If live-restore is enabled, running containers survive the restart. Otherwise, schedule a maintenance window.
If the host port is already in use
Choose a different host port or stop the conflicting service. Verify with ss -tlnp.
If conntrack is exhausted
Increase the table size:

```sh
sudo sysctl -w net.netfilter.nf_conntrack_max=262144
```
Then monitor connection rates. If you consistently fill 262144 entries, reduce health check frequency or connection pooling overhead, or redesign the network layout to reduce NAT usage.
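Note that `sysctl -w` does not survive a reboot. To persist the change, drop it into sysctl's configuration (the file name here is an arbitrary choice):

```sh
echo 'net.netfilter.nf_conntrack_max = 262144' | sudo tee /etc/sysctl.d/99-conntrack.conf
sudo sysctl --system   # reload all sysctl configuration files
```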
Prevention
- Validate runtime mappings in CI. A container started without `-p` will pass local health checks but reject remote traffic.
- Bind applications to `0.0.0.0` by default. Loopback-only listeners pass container-internal tests and fail host-level reachability checks.
- Protect Docker's iptables rules. External firewall managers that flush tables silently remove the DNAT rules required for port forwarding.
- Monitor conntrack and network error counters. Early warning on conntrack utilization prevents silent connection drops during traffic spikes.
- Use explicit bind addresses. Restricting a port to `127.0.0.1` on the host prevents accidental external exposure, while omitting the bind address may expose services unintentionally.
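The CI validation can be as small as a grep over docker ps output: Docker renders every published mapping as `host->container`, so the presence of `->` is the signal. The `has_published_port` helper name is hypothetical.

```sh
# Succeed only if the Ports column contains at least one published
# mapping. Reads the output of: docker ps --format '{{.Ports}}'
has_published_port() {
  grep -q -- '->'
}

# Example: fail the pipeline when nothing is published.
#   docker ps --format '{{.Ports}}' --filter name=web | has_published_port || exit 1
```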
How Netdata helps
Netdata correlates container network drops and errors with connection-refused events to distinguish bridge problems from application failures. It tracks conntrack utilization against nf_conntrack_max and alerts before silent drops begin. Docker daemon API latency is monitored to detect stalls that prevent port setup. Container health check status is shown alongside network metrics to verify whether the port is reachable but the application is rejecting requests.
Related guides
- If `docker ps` or other commands hang while diagnosing ports, see Docker commands hang: docker ps, inspect, and exec freezes.
- If the container exits before you can test the port, see Docker container exits immediately: how to diagnose it.
- If the container is running but fails health checks, see Docker container running but unhealthy: how to diagnose health check failures.
- If DNS resolution inside the container is the real problem, see Docker DNS not working inside containers.
- For daemon-wide issues that prevent network setup, see Docker daemon not responding: how to troubleshoot a hung dockerd.





