Docker port binding: address already in use
A docker run -p 8080:80 or docker compose up fails with bind: address already in use. The error is clear, but the owner is not. ss may show a system service. docker ps may show nothing, yet Docker still refuses. The port can appear free while an orphaned DNAT rule or a split firewall backend blocks the bind.
Distinguish a genuine socket conflict from an orphaned DNAT rule, a rootless Docker regression, and a WSL2 iptables/nftables split brain. Then reclaim the port and prevent recurrence.
What this means
When you publish a port with -p HOST_PORT:CONTAINER_PORT, Docker attempts to reserve the host port in two stages. First, it binds the host socket or starts a docker-proxy process to hold it. Second, it inserts a DNAT rule into the DOCKER iptables chain so traffic reaches the container. If either step fails because the port is already claimed, Docker returns an error.
The exact message tells you where the failure happened:
- `failed to bind host port 0.0.0.0:N/tcp: address already in use` means the Linux socket bind failed. Another process or container genuinely holds the port.
- `failed to start userland proxy for port mapping` means the docker-proxy process could not bind the port.
- `failed to set up container networking: driver failed programming external connectivity on endpoint ...: bind: address already in use` means the socket bind succeeded, but Docker failed to program the iptables rule afterward. This often happens when an orphaned DNAT rule from a previous container still references the port, or when the firewall backend is inconsistent.
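As a rough sketch, that mapping can be automated when you triage daemon logs. The `classify` helper below is hypothetical, not a Docker tool; it only pattern-matches the error strings listed above:

```shell
#!/bin/sh
# Hypothetical triage helper: map a dockerd error line to the failure stage.
classify() {
  case "$1" in
    *"failed to bind host port"*)
      echo "socket bind failed: a process or container holds the port" ;;
    *"failed to start userland proxy"*)
      echo "docker-proxy could not bind the port" ;;
    *"driver failed programming external connectivity"*)
      echo "iptables programming failed: check for orphaned DNAT rules" ;;
    *)
      echo "unrecognized error" ;;
  esac
}
classify "failed to bind host port 0.0.0.0:8080/tcp: address already in use"
```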
If you are running rootless Docker, the slirp4netns port driver in Engine 29.0.0 through 29.0.2 can return the same error even when no process holds the port.
Common causes
| Cause | What it looks like | First thing to check |
|---|---|---|
| Another host process bound the port | ss shows a non-Docker process listening on the host port | `sudo ss -tlnp \| grep ':<PORT>'` |
| Another container already published the port | docker ps shows a container mapped to the same host IP and port | `docker ps -a --format '{{.Names}}\t{{.Ports}}' \| grep ':<PORT>'` |
| Orphaned iptables DNAT rule | ss shows nothing on the port, but Docker still refuses to bind | `sudo iptables -t nat -L DOCKER -n --line-numbers \| grep '<PORT>'` |
| Rootless Docker 29.0.x regression | No process and no iptables rule holds the port; you run rootless Docker 29.0.0 through 29.0.2 | docker version --format '{{.Server.Version}}' |
| WSL2 iptables/nftables split brain | Rules exist in nftables but not iptables-legacy, or vice versa | Compare iptables -S and nft list ruleset |
| Host service on a well-known port | systemd-resolved binds 127.0.0.53:53, blocking containers that need port 53 | systemctl status systemd-resolved |
Quick checks
```shell
# Check which process holds the host port
sudo ss -tlnp | grep -E ':8080\b'
```
This lists the PID and process name. If the owner is not docker-proxy or a containerd-shim, a host service owns the port.
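If you want just the owner out of that output, a small sed expression can pull the process name and PID from the `users:((...))` field. The `ss` line below is a hardcoded sample for illustration; on a real host, pipe `sudo ss -tlnp` output in instead:

```shell
#!/bin/sh
# Sample `ss -tlnp` output line, hardcoded for illustration
line='LISTEN 0 4096 0.0.0.0:8080 0.0.0.0:* users:(("docker-proxy",pid=1234,fd=4))'
# Extract the process name and PID from the users:((...)) field
echo "$line" | sed -n 's/.*users:(("\([^"]*\)",pid=\([0-9]*\).*/\1 \2/p'
```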
```shell
# Check which containers already use the host port
docker ps -a --format '{{.ID}}\t{{.Names}}\t{{.Ports}}' | grep -E ':8080\b'
```
Lists running and stopped containers with that mapping. A stopped container can still hold the binding in some network configurations.
```shell
# List Docker DNAT rules for the port
sudo iptables -t nat -L DOCKER -n --line-numbers | grep -E '8080\b'
```
Reveals orphaned iptables rules that survive after a container is removed. A rule with no corresponding container means an orphan.
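One way to surface orphans programmatically is to diff the host ports referenced by DNAT rules against the ports Docker currently publishes. The two lists below are hardcoded samples; on a real host they would come from `sudo iptables -t nat -S DOCKER` and `docker ps --format '{{.Ports}}'`:

```shell
#!/bin/sh
# Sample data standing in for real command output
dnat_ports='8080
5432'
published_ports='8080'
# Ports that have a DNAT rule but no published container are orphan candidates
echo "$dnat_ports" | sort > /tmp/dnat.$$
echo "$published_ports" | sort > /tmp/pub.$$
comm -23 /tmp/dnat.$$ /tmp/pub.$$
rm -f /tmp/dnat.$$ /tmp/pub.$$
```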
```shell
# Check nftables rules if using the experimental backend
sudo nft list ruleset | grep -E '8080\b'
```
Docker Engine 29 can use an experimental nftables backend. If you only check iptables, you may miss rules programmed into nftables.
```shell
# Inspect a container's exact port binding syntax
docker inspect <container> --format '{{json .HostConfig.PortBindings}}'
```
Verifies whether you bound to 0.0.0.0 or 127.0.0.1. An explicit interface binding changes the conflict surface.
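To check the interface programmatically, grep the `HostIp` field out of that JSON. The binding below is a hardcoded sample of typical inspect output:

```shell
#!/bin/sh
# Sample PortBindings JSON, as docker inspect would print it
bindings='{"80/tcp":[{"HostIp":"127.0.0.1","HostPort":"8080"}]}'
# A HostIp of 127.0.0.1 means the port is claimed on loopback only
echo "$bindings" | grep -o '"HostIp":"[^"]*"'
```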
```shell
# Alternative process check with lsof
sudo lsof -i :8080
```
Useful when ss output is ambiguous or when the socket state filters it out.
```shell
# Check Docker Engine version for rootless regression
docker version --format 'Server: {{.Server.Version}}'
```
Rootless Docker 29.0.0 through 29.0.2 has a confirmed slirp4netns port binding regression that mimics a conflict.
```shell
# Verify whether systemd-resolved holds port 53
systemctl is-active systemd-resolved
```
On many distributions, systemd-resolved binds 127.0.0.53:53 and blocks containers like Pi-hole from publishing that port.
How to diagnose it
1. Confirm the exact host port and interface. A binding to `127.0.0.1:8080` does not conflict with `0.0.0.0:8080` in the same way that two `0.0.0.0` bindings do. Check your publish syntax with `docker inspect <container> --format '{{.HostConfig.PortBindings}}'`.
2. Find the current socket holder. Run `sudo ss -tlnp | grep -E ':<PORT>\b'` and `sudo lsof -i :<PORT>`. If a process is listed, that process owns the port. A container using `--network host` binds directly to the host socket and appears here under its own process name, not as docker-proxy.
3. Check for container conflicts. Run `docker ps -a` and scan the `PORTS` column. A stopped container may still hold its port mapping depending on network state, and a running container on `host` networking binds directly without appearing in the published-ports list.
4. Inspect iptables for orphaned DNAT rules. If `ss` and `lsof` return nothing, list the Docker NAT chain: `sudo iptables -t nat -L DOCKER -n --line-numbers`. Look for a DNAT rule referencing your port. If one exists and no corresponding container is running, the rule is orphaned. You can delete it by line number with `sudo iptables -t nat -D DOCKER <N>`, or restart dockerd to flush and reprogram all rules. Warning: deleting the wrong rule breaks NAT for running containers. If you are unsure of the line number, restart dockerd instead.
5. Verify the firewall backend. On Docker Engine 29 with the experimental nftables backend, or on WSL2 where iptables-legacy and nftables diverge, Docker may program rules into a backend that does not match what your diagnostic tools show. Run both `iptables -S` and `nft list ruleset` and compare the outputs.
6. Check for the rootless Docker 29.0.x regression. If you are running rootless Docker and the version is 29.0.0 through 29.0.2, the slirp4netns port driver may falsely report the port as in use.
7. Distinguish from hairpin NAT. If the container starts successfully but its own application cannot reach `localhost:PORT`, you are looking at a hairpin NAT limitation on the bridge network, not a bind conflict.
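The diagnostic steps above can be collapsed into a rough one-shot script. This is a sketch, not an official tool: the contested port is assumed to be 8080, each check is skipped when its tool is unavailable, and the iptables check additionally needs root to show anything:

```shell
#!/bin/sh
# Rough one-shot triage for a contested port. Each check degrades
# gracefully when its tool is missing or unprivileged.
triage() {
  PORT=8080   # assumed port; change to the one in your error message
  echo "== socket holder =="
  { command -v ss >/dev/null && ss -tlnp 2>/dev/null | grep ":${PORT}"; } \
    || echo "(nothing found, or ss unavailable)"
  echo "== container port mappings =="
  { command -v docker >/dev/null && docker ps -a --format '{{.Names}}\t{{.Ports}}' 2>/dev/null | grep ":${PORT}"; } \
    || echo "(nothing found, or docker unavailable)"
  echo "== DOCKER chain DNAT rules (needs root) =="
  { command -v iptables >/dev/null && iptables -t nat -L DOCKER -n 2>/dev/null | grep "${PORT}"; } \
    || echo "(nothing found, or iptables unavailable/unprivileged)"
}
triage
```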
Metrics and signals to monitor
| Signal | Why it matters | Warning sign |
|---|---|---|
| Host listening sockets (ss -tlnp) | Reveals which processes hold ports before a conflict blocks a deployment | Expected port appears with unexpected PID or process name |
| Container port mappings (docker ps) | Shows which containers have already claimed host ports | Two containers mapped to the same host IP:port pair |
| iptables DOCKER chain rule count | Orphaned rules accumulate after forced removals or daemon crashes | Rule count grows without corresponding container count |
| conntrack utilization | Every published port creates NAT connections; exhaustion causes silent drops | nf_conntrack_count approaching nf_conntrack_max |
| Docker daemon error logs | Error patterns expose recurring port bind or programming failures | Sustained address already in use or failed to start userland proxy messages |
| Container creation failures | Port conflicts manifest as create or start failures in automation | Creation failure rate above baseline for a service |
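Conntrack utilization from the table above is a simple ratio of two kernel counters. The values below are hardcoded for illustration; on a real host, read them from `/proc/sys/net/netfilter/nf_conntrack_count` and `/proc/sys/net/netfilter/nf_conntrack_max`:

```shell
#!/bin/sh
# Hardcoded sample values; on a real host substitute:
#   count=$(cat /proc/sys/net/netfilter/nf_conntrack_count)
#   max=$(cat /proc/sys/net/netfilter/nf_conntrack_max)
count=52000
max=65536
# Print utilization as a rounded percentage
awk -v c="$count" -v m="$max" 'BEGIN { printf "conntrack utilization: %.0f%%\n", 100 * c / m }'
```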
Fixes
If another host process holds the port
Stop or reconfigure the host service. If the service is required on that port, change the container’s host port to an unused one. For services that only need local access, bind to 127.0.0.1:<PORT>:<CONTAINER_PORT> instead of 0.0.0.0, which reduces collision surface and avoids LAN exposure.
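In Compose, the same localhost-only binding looks like the fragment below; the service name and image are placeholders:

```yaml
services:
  web:            # placeholder service name
    image: nginx  # placeholder image
    ports:
      - "127.0.0.1:8080:80"   # reachable only from the host itself, not the LAN
```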
If another container holds the port
Stop or remove the conflicting container. If you are using Docker Compose, run docker compose up -d --force-recreate after correcting the port mapping in the compose file. Do not rely on docker compose up alone to rebind ports if the previous container is still present.
If an orphaned iptables DNAT rule remains
List the rule with sudo iptables -t nat -L DOCKER -n --line-numbers, note the line number, and delete it with sudo iptables -t nat -D DOCKER <N>. If you are unsure which rules are safe to remove, restart dockerd. If live-restore is enabled, running containers survive the restart; otherwise, the restart disrupts all containers.
If the rootless Docker 29.0.x regression applies
If you are affected by this regression, monitor the upstream issue for a patch. There is no confirmed safe workaround at the time of writing.
If WSL2 iptables or nftables split brain applies
Force iptables-legacy and restart dockerd:
```shell
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo systemctl restart docker
```
This assumes your WSL2 distribution uses systemd. If it does not, restart the Docker daemon through your init system.
If a firewall reload wiped Docker chains
Avoid flushing the nat table or replacing the entire ruleset while Docker is running. Use the DOCKER-USER chain for custom rules. Docker preserves this chain across its own updates, whereas a full ruleset replacement will destroy the DOCKER and DOCKER-ISOLATION chains.
If systemd-resolved blocks port 53
Prefer mapping the container to a different host port. If you must use port 53, stopping and disabling systemd-resolved will break local DNS resolution on many distributions. Plan for an alternative resolver before doing so.
Prevention
- Pin explicit host ports. Do not rely on dynamic allocation for services that need a known port. Explicit mappings expose conflicts during code review.
- Restrict bindings to localhost. Use `127.0.0.1:<HOST_PORT>:<CONTAINER_PORT>` whenever external LAN access is unnecessary. This reduces both the attack surface and the collision surface.
- Validate compose files before deployment. Check for duplicate host ports across services. A duplicate mapping may not fail until the second service starts.
- Avoid firewall ruleset flushes. Program custom rules into `DOCKER-USER` instead of replacing the entire nat table.
- Monitor iptables rule count and conntrack utilization. These resources have hard limits. Exhaustion causes silent connection drops, not just bind failures.
- Keep Docker Engine updated. Engine 29 contains networking fixes, though rootless operators should verify current bug trackers before upgrading.
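A rough pre-deployment check for duplicate host ports can be built from grep and uniq. The compose content below is an inline sample with a deliberate collision on 8080; in practice, point the pipeline at your real compose file. The pattern is deliberately naive and ignores IP-prefixed mappings like `"127.0.0.1:8080:80"`:

```shell
#!/bin/sh
# Sample compose file with a deliberate host-port collision on 8080
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
services:
  web:
    ports:
      - "8080:80"
  api:
    ports:
      - "8080:3000"
EOF
# Pull host ports out of simple "HOST:CONTAINER" mappings, print duplicates
grep -oE '"[0-9]+:[0-9]+"' "$tmp" | cut -d: -f1 | tr -d '"' | sort | uniq -d
rm -f "$tmp"
```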
How Netdata helps
- Docker daemon error logs. Alert on `address already in use` and `failed to start userland proxy` from dockerd.
- Container creation failures. Spikes during rolling updates often signal port conflicts.
- Host socket metrics. Track listening ports and conntrack saturation to catch exhaustion before it blocks container starts.
- Docker events. Rapid restart loops can follow a failed port binding deployment.
Related guides
- Docker commands hang: docker ps, inspect, and exec freezes
- Docker container exits immediately: how to diagnose it
- Docker container high CPU usage: causes and fixes
- Docker container high memory usage: how to diagnose it
- Docker container keeps restarting: causes, checks, and fixes
- Docker container memory leak: how to find one and prove it
- Docker container running but unhealthy: how to diagnose health check failures
- Docker CPU throttling: the hidden cause of container latency
- Docker daemon not responding: how to troubleshoot a hung dockerd
- Docker disk space full: how to troubleshoot /var/lib/docker
- Docker DNS not working inside containers
- Docker exit code 1: application errors and how to find them