Docker port binding: address already in use

A docker run -p 8080:80 or docker compose up fails with bind: address already in use. The error is clear, but the owner is not. ss may show a system service. docker ps may show nothing, yet Docker still refuses. The port can appear free while an orphaned DNAT rule or a split firewall backend blocks the bind.

Distinguish a genuine socket conflict from an orphaned DNAT rule, a rootless Docker regression, and a WSL2 iptables/nftables split brain. Then reclaim the port and prevent recurrence.

What this means

When you publish a port with -p HOST_PORT:CONTAINER_PORT, Docker attempts to reserve the host port in two stages. First, it binds the host socket or starts a docker-proxy process to hold it. Second, it inserts a DNAT rule into the DOCKER iptables chain so traffic reaches the container. If either step fails because the port is already claimed, Docker returns an error.

The exact message tells you where the failure happened:

  • failed to bind host port 0.0.0.0:N/tcp: address already in use means the Linux socket bind failed. Another process or container genuinely holds the port.
  • failed to start userland proxy for port mapping means the docker-proxy process could not bind the port.
  • failed to set up container networking: driver failed programming external connectivity on endpoint ...: bind: address already in use means the socket bind succeeded, but Docker failed to program the iptables rule afterward. This often happens when an orphaned DNAT rule from a previous container still references the port, or when the firewall backend is inconsistent.

If you are running rootless Docker, the slirp4netns port driver in Engine 29.0.0 through 29.0.2 can return the same error even when no process holds the port.
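
In automation, the affected range can be gated with a small helper (a sketch; `affected_rootless_version` is a hypothetical name, and the range 29.0.0 through 29.0.2 is the one described above):

```bash
# Hypothetical helper: report whether a server version falls inside the
# rootless slirp4netns regression range (29.0.0 through 29.0.2).
affected_rootless_version() {
  case "$1" in
    29.0.0|29.0.1|29.0.2) echo yes ;;
    *) echo no ;;
  esac
}

# Usage (requires Docker):
#   affected_rootless_version "$(docker version --format '{{.Server.Version}}')"
```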

Common causes

| Cause | What it looks like | First thing to check |
|---|---|---|
| Another host process bound the port | `ss` shows a non-Docker process listening on the host port | `sudo ss -tlnp \| grep ':<PORT>'` |
| Another container already published the port | `docker ps` shows a container mapped to the same host IP and port | `docker ps -a --format '{{.Names}}\t{{.Ports}}' \| grep ':<PORT>'` |
| Orphaned iptables DNAT rule | `ss` shows nothing on the port, but Docker still refuses to bind | `sudo iptables -t nat -L DOCKER -n --line-numbers \| grep '<PORT>'` |
| Rootless Docker 29.0.x regression | No process and no iptables rule holds the port; you run rootless Docker 29.0.0 through 29.0.2 | `docker version --format '{{.Server.Version}}'` |
| WSL2 iptables/nftables split brain | Rules exist in nftables but not iptables-legacy, or vice versa | Compare `iptables -S` and `nft list ruleset` |
| Host service on a well-known port | systemd-resolved binds 127.0.0.53:53, blocking containers that need port 53 | `systemctl status systemd-resolved` |

Quick checks

# Check which process holds the host port
sudo ss -tlnp | grep -E ':8080\b'

This lists the PID and process name. If the owner is not docker-proxy or a containerd-shim, a host service owns the port.
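
In scripts, the owning process name can be pulled out of the `users:((...))` field that the `-p` flag adds to each line (a sketch; `port_owner` is a hypothetical helper):

```bash
# Extract the process name from an `ss -tlnp` line, e.g.
#   LISTEN 0 4096 0.0.0.0:8080 0.0.0.0:* users:(("docker-proxy",pid=1234,fd=4))
port_owner() {
  grep -oE 'users:\(\("[^"]+"' | cut -d'"' -f2
}

# Usage (root is needed to see other users' sockets):
#   sudo ss -tlnp | grep ':8080' | port_owner
```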

# Check which containers already use the host port
docker ps -a --format '{{.ID}}\t{{.Names}}\t{{.Ports}}' | grep -E ':8080\b'

Lists running and stopped containers with that mapping. A stopped container can still hold the binding in some network configurations.

# List Docker DNAT rules for the port
sudo iptables -t nat -L DOCKER -n --line-numbers | grep -E '8080\b'

Reveals orphaned iptables rules that survive after a container is removed. A rule with no corresponding container is an orphan.

# Check nftables rules if using the experimental backend
sudo nft list ruleset | grep -E '8080\b'

Docker Engine 29 can use an experimental nftables backend. If you only check iptables, you may miss rules programmed into nftables.

# Inspect a container's exact port binding syntax
docker inspect <container> --format '{{json .HostConfig.PortBindings}}'

Verifies whether you bound to 0.0.0.0 or 127.0.0.1. An explicit interface binding changes the conflict surface.

# Alternative process check with lsof
sudo lsof -i :8080

Useful when ss output is ambiguous or when the socket state filters it out.

# Check Docker Engine version for rootless regression
docker version --format 'Server: {{.Server.Version}}'

Rootless Docker 29.0.0 through 29.0.2 has a confirmed slirp4netns port binding regression that mimics a conflict.

# Verify whether systemd-resolved holds port 53
systemctl is-active systemd-resolved

On many distributions, systemd-resolved binds 127.0.0.53:53 and blocks containers like Pi-hole from publishing that port.

How to diagnose it

  1. Confirm the exact host port and interface. A binding to 127.0.0.1:8080 does not conflict with 0.0.0.0:8080 in the same way that two 0.0.0.0 bindings do. Check your publish syntax with docker inspect <container> --format '{{.HostConfig.PortBindings}}'.

  2. Find the current socket holder. Run sudo ss -tlnp | grep -E ':<PORT>\b' and sudo lsof -i :<PORT>. If a process is listed, that process owns the port. A container using --network host binds directly to the host socket and appears here under its own process name, not as docker-proxy.

  3. Check for container conflicts. Run docker ps -a and scan the PORTS column. A stopped container may still hold its port mapping depending on network state, and a running container on host networking binds directly without appearing in the published-ports list.

  4. Inspect iptables for orphaned DNAT rules. If ss and lsof return nothing, list the Docker NAT chain: sudo iptables -t nat -L DOCKER -n --line-numbers. Look for a DNAT rule referencing your port. If one exists and no corresponding container is running, the rule is orphaned. You can delete it by line number with sudo iptables -t nat -D DOCKER <N>, or restart dockerd to flush and reprogram all rules. Warning: deleting the wrong rule breaks NAT for running containers. If you are unsure of the line number, restart dockerd instead.

  5. Verify the firewall backend. On Docker Engine 29 with the experimental nftables backend, or on WSL2 where iptables-legacy and nftables diverge, Docker may program rules into a backend that does not match what your diagnostic tools show. Run both iptables -S and nft list ruleset and compare the outputs.

  6. Check for the rootless Docker 29.0.x regression. If you are running rootless Docker and the version is 29.0.0 through 29.0.2, the slirp4netns port driver may falsely report the port as in use.

  7. Distinguish from hairpin NAT. If the container starts successfully but its own application cannot reach localhost:PORT, you are looking at a hairpin NAT limitation on the bridge network, not a bind conflict.
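
Steps 3 and 4 can be combined into a quick comparison of DNAT-rule ports against published container ports. A minimal sketch, assuming iptables-save-style output from `iptables -S` and the default `docker ps` port format; the function names are illustrative:

```bash
#!/usr/bin/env bash
# Ports referenced by DNAT rules, parsed from `iptables -t nat -S DOCKER` output.
parse_dnat_ports() {
  grep -oE 'dport [0-9]+' | awk '{print $2}' | sort -u
}

# Host ports published by containers, parsed from `docker ps -a --format '{{.Ports}}'`.
parse_published_ports() {
  grep -oE ':[0-9]+->' | grep -oE '[0-9]+' | sort -u
}

# Live usage (requires Docker and root). Any port printed has a DNAT rule
# but no container publishing it, i.e. a candidate orphan:
#   comm -23 <(sudo iptables -t nat -S DOCKER | parse_dnat_ports) \
#            <(docker ps -a --format '{{.Ports}}' | parse_published_ports)
```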

Metrics and signals to monitor

| Signal | Why it matters | Warning sign |
|---|---|---|
| Host listening sockets (`ss -tlnp`) | Reveals which processes hold ports before a conflict blocks a deployment | Expected port appears with unexpected PID or process name |
| Container port mappings (`docker ps`) | Shows which containers have already claimed host ports | Two containers mapped to the same host IP:port pair |
| iptables DOCKER chain rule count | Orphaned rules accumulate after forced removals or daemon crashes | Rule count grows without corresponding container count |
| conntrack utilization | Every published port creates NAT connections; exhaustion causes silent drops | nf_conntrack_count approaching nf_conntrack_max |
| Docker daemon error logs | Error patterns expose recurring port bind or programming failures | Sustained address already in use or failed to start userland proxy messages |
| Container creation failures | Port conflicts manifest as create or start failures in automation | Creation failure rate above baseline for a service |
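
The conntrack signal is easy to poll directly. A sketch, assuming the standard /proc locations on a kernel with conntrack loaded (`conntrack_pct` is a hypothetical helper):

```bash
# Percentage of the conntrack table in use, given count and max.
conntrack_pct() {
  awk -v c="$1" -v m="$2" 'BEGIN { printf "%d\n", (100 * c) / m }'
}

# Live usage:
#   conntrack_pct "$(cat /proc/sys/net/netfilter/nf_conntrack_count)" \
#                 "$(cat /proc/sys/net/netfilter/nf_conntrack_max)"
```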

Fixes

If another host process holds the port

Stop or reconfigure the host service. If the service is required on that port, change the container’s host port to an unused one. For services that only need local access, bind to 127.0.0.1:<PORT>:<CONTAINER_PORT> instead of 0.0.0.0, which reduces collision surface and avoids LAN exposure.

If another container holds the port

Stop or remove the conflicting container. If you are using Docker Compose, run docker compose up -d --force-recreate after correcting the port mapping in the compose file. Do not rely on docker compose up alone to rebind ports if the previous container is still present.

If an orphaned iptables DNAT rule remains

List the rule with sudo iptables -t nat -L DOCKER -n --line-numbers, note the line number, and delete it with sudo iptables -t nat -D DOCKER <N>. If you are unsure which rules are safe to remove, restart dockerd. If live-restore is enabled, running containers survive the restart; otherwise, the restart disrupts all containers.

If the rootless Docker 29.0.x regression applies

If you are affected by this regression, monitor the upstream issue for a patch. There is no confirmed safe workaround at the time of writing.

If WSL2 iptables or nftables split brain applies

Force iptables-legacy and restart dockerd:

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo systemctl restart docker

This assumes your WSL2 distribution uses systemd. If it does not, restart the Docker daemon through your init system.
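
To confirm which backend took effect, `iptables --version` reports it in parentheses, e.g. `iptables v1.8.7 (nf_tables)` or `(legacy)`. A small parsing sketch (`iptables_backend` is a hypothetical helper):

```bash
# Extract the active backend name from `iptables --version` output.
iptables_backend() {
  grep -oE '\((nf_tables|legacy)\)' | tr -d '()'
}

# Usage:
#   iptables --version | iptables_backend
```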

If a firewall reload wiped Docker chains

Avoid flushing the nat table or replacing the entire ruleset while Docker is running. Put custom rules in the DOCKER-USER chain: Docker preserves that chain when it reprograms its own rules, whereas a full ruleset replacement destroys the DOCKER and DOCKER-ISOLATION chains.

If systemd-resolved blocks port 53

Prefer mapping the container to a different host port. If you must use port 53, stopping and disabling systemd-resolved will break local DNS resolution on many distributions. Plan for an alternative resolver before doing so.

Prevention

  • Pin explicit host ports. Do not rely on dynamic allocation for services that need a known port. Explicit mappings expose conflicts during code review.
  • Restrict bindings to localhost. Use 127.0.0.1:<HOST_PORT>:<CONTAINER_PORT> whenever external LAN access is unnecessary. This reduces both the attack surface and the collision surface.
  • Validate compose files before deployment. Check for duplicate host ports across services. A duplicate mapping may not fail until the second service starts.
  • Avoid firewall ruleset flushes. Program custom rules into DOCKER-USER instead of replacing the entire nat table.
  • Monitor iptables rule count and conntrack utilization. These resources have hard limits. Exhaustion causes silent connection drops, not just bind failures.
  • Keep Docker Engine updated. Engine 29 contains networking fixes, though rootless operators should verify current bug trackers before upgrading.
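
The duplicate-port review can be automated against the canonical form that `docker compose config` emits, where every mapping becomes a `published:` line. A sketch, assuming Compose v2's long-syntax output (`dup_published_ports` is an illustrative name):

```bash
# Print any host port published by more than one mapping in a compose project.
dup_published_ports() {
  grep -oE 'published: "?[0-9]+"?' | grep -oE '[0-9]+' | sort | uniq -d
}

# Usage (requires Docker Compose v2):
#   docker compose config | dup_published_ports
```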

How Netdata helps

  • Docker daemon error logs. Alert on address already in use and failed to start userland proxy from dockerd.
  • Container creation failures. Spikes during rolling updates often signal port conflicts.
  • Host socket metrics. Track listening ports and conntrack saturation to catch exhaustion before it blocks container starts.
  • Docker events. Rapid restart loops can follow a failed port binding deployment.