# Docker container exits immediately: how to diagnose it
A container that exits immediately prints its ID during docker run, appears briefly in docker ps, then vanishes. docker ps -a lists it as Exited with an uptime measured in seconds. Unlike a restart loop, it stays stopped.
A Docker container is a Linux process wrapper, not a virtual machine. Its lifecycle is bound to PID 1 in the container’s namespace. When that process completes, crashes, or is killed, the container exits. An immediate exit means the process finished its work, failed to start, or was terminated during initialization. This guide gives you a diagnostic flow to find out why PID 1 terminated, interpret the exit code, and fix the root cause.
## Common causes
| Cause | What it looks like | First thing to check |
|---|---|---|
| Foreground process completed normally | Exit code 0; container ran its command and finished | docker logs and the command/entrypoint in the image |
| Command or binary not found | Exit code 127; shell cannot find the executable | docker inspect --format='{{.Config.Cmd}} {{.Config.Entrypoint}}' |
| Permission denied on executable | Exit code 126; file exists but cannot execute | File permissions inside the image or mount |
| Application crash on startup | Exit code 1; stderr shows a stack trace or config error | docker logs --tail 100 |
| OOM kill during initialization | Exit code 137; logs may be empty or truncated | docker inspect --format='{{.State.OOMKilled}}' |
| Daemonized process inside container | Exit code 0; app forks to background and main shell exits | Whether the app is running in foreground mode |
| Shell-form ENTRYPOINT or CMD | Shell exits when the command fails; app does not run as PID 1 | docker inspect for exec vs shell form |
| Missing environment or secrets | Exit code 1; application logs mention missing variables | docker inspect --format='{{.Config.Env}}' |
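Several of these causes trace back to how the image defines its startup command. A minimal Dockerfile sketch contrasting the two forms (the base image and `server.js` path are illustrative, not from the original):

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY . .

# Shell form: /bin/sh -c "node server.js" runs as PID 1, not node.
# CMD node server.js

# Exec form: node runs directly as PID 1 and receives SIGTERM itself.
CMD ["node", "server.js"]
```

With the exec form, `docker stop` delivers SIGTERM straight to the application instead of to a shell wrapper that may not forward it.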
## Quick checks

```bash
# Quick checks for an immediately exited container
docker ps -a --filter name=<container_name> --format "table {{.Names}}\t{{.Status}}\t{{.RunningFor}}"
docker inspect --format '{{.State.ExitCode}}' <container_name>
docker inspect --format '{{.State.OOMKilled}}' <container_name>
docker logs --tail 100 <container_name>
docker inspect --format 'Cmd={{.Config.Cmd}} Entrypoint={{.Config.Entrypoint}}' <container_name>
docker run --rm -it --entrypoint sh <image>
sudo dmesg | grep -i "oom"
sudo journalctl -u docker.service -p err --since "10 minutes ago"
```
## How to diagnose it
1. **Check the exit code.** Run `docker inspect --format '{{.State.ExitCode}}' <name>`. Code 0 means the process completed successfully. Code 1 means a general application error. Code 126 means a permission problem. Code 127 means a missing binary. Codes 128+N mean the process was killed by signal N (137 for SIGKILL, 139 for SIGSEGV, 143 for SIGTERM). The exit code tells you whether to look at application logic, configuration, or the kernel.
2. **Cross-reference with OOMKilled.** Run `docker inspect --format '{{.State.OOMKilled}}' <name>`. If this is `true` and the exit code is 137, the kernel OOM killer terminated the container because it exceeded its memory limit. Run `sudo dmesg | grep -i oom` for confirmation. If OOMKilled is `false` and the exit code is 137, something else sent SIGKILL.
3. **Read the logs.** Run `docker logs --tail 100 <name>`. If the application crashed before flushing output or writes only to a file inside the container, the logs may be empty. For Python applications, a missing `PYTHONUNBUFFERED=1` often hides startup errors because stdout is buffered by default. If logs are empty, run the container interactively with `--entrypoint sh` and execute the command manually to see the error.
4. **Inspect the image command.** Run `docker inspect` on the container or image and check `Config.Cmd` and `Config.Entrypoint`. A shell-form command such as `CMD node server.js` wraps the process in `/bin/sh -c`, making `/bin/sh` PID 1. If the shell finishes or the process daemonizes, the container exits. An exec-form command such as `CMD ["node", "server.js"]` runs the binary directly as PID 1.
5. **Verify foreground mode.** If the application is a server that daemonizes by default (for example, NGINX, Apache, or gunicorn started with `-D`), the main process forks to the background and the initial process exits. The container dies immediately because PID 1 finished. Check the application flags and ensure it runs in the foreground (for example, `nginx -g 'daemon off;'`, or gunicorn without `-D`).
6. **Check for missing dependencies.** Exit code 127 or errors in logs about missing files often mean the container filesystem lacks a binary, library, or mounted configuration file. Run an interactive shell in the image and verify paths and permissions.
7. **Review resource limits.** If the container is created but dies instantly with exit code 137 and OOMKilled is true, review the memory limit. Even if the application normally fits, initialization spikes (JVM heap allocation, large imports) can exceed the limit immediately. Temporarily raise the limit to test. Do not remove limits entirely on production hosts if the application behavior is unknown.
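The exit-code conventions in step 1 can be captured in a small helper. This is a sketch, not part of Docker itself; the function name `classify_exit` is illustrative:

```bash
#!/bin/sh
# Sketch: map a container exit code to a likely cause, following the
# convention that 128+N means the process was killed by signal N.
classify_exit() {
  code=$1
  if [ "$code" -eq 0 ]; then echo "clean exit"
  elif [ "$code" -eq 126 ]; then echo "permission denied"
  elif [ "$code" -eq 127 ]; then echo "command not found"
  elif [ "$code" -gt 128 ]; then echo "killed by signal $((code - 128))"
  else echo "application error"
  fi
}

# Feed it the value from: docker inspect --format '{{.State.ExitCode}}' <name>
classify_exit 137   # prints: killed by signal 9
```

For signal-based exits, pair the result with the `OOMKilled` check from step 2 to tell an OOM kill apart from an external `kill -9`.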
## Metrics and signals to monitor
| Signal | Why it matters | Warning sign |
|---|---|---|
| Container exit code | Classifies the failure mechanism | Any non-zero code for a long-running service |
| OOMKilled status | Distinguishes OOM from external SIGKILL | true with exit code 137 |
| Container restart count | Reveals crash loops masked by restart policies | Non-zero restart count after a fresh start |
| Container start-to-exit duration | Distinguishes instant failure from slow crash | StartedAt and FinishedAt in docker inspect are seconds apart |
| Docker daemon errors | Catches runtime or storage driver issues | Errors in journalctl -u docker.service |
| Host memory pressure | System-wide OOM can kill containers without individual limits | dmesg OOM messages or high memory usage |
| Container CPU throttling | Can cause startup timeouts that trigger kills | nr_throttled increasing in cgroup cpu.stat |
## Fixes
### If the cause is a missing or misconfigured command
Use docker inspect <image> to confirm Cmd and Entrypoint, then verify the binary exists inside the image with an interactive shell. If using shell form, switch to exec form so the binary receives Unix signals directly and runs as PID 1. Verify that volume mounts or secrets required at startup are present.
### If the cause is a daemonizing process
Reconfigure the application to stay in the foreground. Examples include nginx -g 'daemon off;', apachectl -D FOREGROUND, or removing -D from gunicorn. The process running as PID 1 must not fork to background and exit. If you cannot change the application, use a lightweight init system such as tini, but ensure the final process remains in the foreground.
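As a minimal sketch for the NGINX case (the base image tag is illustrative; the official `nginx` images already configure this):

```dockerfile
FROM nginx:1.27-alpine
# Exec form plus "daemon off;" keeps nginx itself as PID 1 in the foreground,
# so the container lives as long as the server does.
CMD ["nginx", "-g", "daemon off;"]
```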
### If the cause is an OOM kill
Increase the container memory limit or reduce the application’s memory footprint. For JVM workloads, ensure -Xmx leaves headroom for metaspace, thread stacks, and native memory. A common practice is setting -Xmx to roughly 75% of the container limit. Check dmesg to confirm the OOM killer targeted your container’s cgroup.
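The 75% rule of thumb can be expressed as a small helper; the function name `heap_flag` is illustrative, and the right percentage depends on the workload's native-memory footprint:

```bash
#!/bin/sh
# Sketch: derive -Xmx as ~75% of the container memory limit (in MiB),
# leaving headroom for metaspace, thread stacks, and native allocations.
heap_flag() {
  limit_mib=$1
  echo "-Xmx$(( limit_mib * 75 / 100 ))m"
}

heap_flag 512    # prints: -Xmx384m
heap_flag 2048   # prints: -Xmx1536m
```

Modern JVMs can also size the heap from the cgroup limit directly via `-XX:MaxRAMPercentage`, which avoids hardcoding `-Xmx` in the image.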
### If the cause is an application crash
Fix the underlying code or configuration. Use docker run --rm -it --entrypoint sh <image> to reproduce the startup sequence manually. Check environment variables, secrets mounts, and network dependencies. If the application writes logs to a file inside the container, read that file directly or redirect output to stdout/stderr.
### If the cause is empty or buffered logs
Set unbuffered output for interpreted languages. For example, set ENV PYTHONUNBUFFERED=1 in the Dockerfile. Redirect application logs to stdout/stderr instead of filesystem paths inside the container so docker logs captures startup errors.
## Prevention
- Use exec form for ENTRYPOINT and CMD. This avoids `/bin/sh` wrapper issues and ensures the application receives SIGTERM correctly.
- Run applications in the foreground. Verify that your PID 1 process does not daemonize. Document required flags in your Dockerfile or orchestration manifests.
- Set log output to stdout/stderr and configure unbuffered modes. Do not rely on filesystem logs inside the container for startup errors.
- Configure health checks for long-running containers. Health checks catch runtime dependency failures and hangs that might eventually trigger a restart loop.
- Set appropriate resource limits. Size memory limits to account for startup spikes, not just steady-state usage.
- Validate images in CI. Run the container in your build pipeline and check that it stays running for a minimum duration before pushing the image to a registry.
- Monitor exit codes and OOMKilled status proactively. Do not wait for disk exhaustion from log-filled crash loops.
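The health-check point above can be sketched as a Dockerfile fragment. The `/healthz` endpoint, port, and the presence of `curl` in the image are assumptions for illustration:

```dockerfile
# Assumes the image ships curl and the app serves /healthz on port 8080.
# start-period gives the app time to initialize before failures count.
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD curl -fsS http://localhost:8080/healthz || exit 1
```

A container that starts but fails its health check shows as `unhealthy` in `docker ps`, which surfaces runtime dependency failures that a clean startup would otherwise hide.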
## How Netdata helps
Netdata monitors container state changes and cgroup metrics in real time. Instead of manually polling docker inspect, correlate the exit with resource pressure or daemon events using:
- Container state charts show when a container moves to `exited` and track abnormal termination counts.
- Memory utilization vs. limit identifies OOM risk before the kernel kills the container.
- Docker daemon metrics expose engine runtime errors and response latency that can prevent a container from starting.
- CPU throttling metrics reveal whether CFS bandwidth limits slow startup enough to trigger health-check kills or timeouts.
## Related guides
- If commands hang instead of returning, see Docker commands hang.
- For containers stuck in a restart loop, see Docker container keeps restarting: causes, checks, and fixes.
- If the exit code is 137, see Docker exit code 137: OOMKilled or SIGKILL? to distinguish memory pressure from external kills.
- For resource-related crashes, see Docker container high memory usage: how to diagnose it and Docker container high CPU usage: causes and fixes.
- For daemon-level issues, see Docker daemon not responding: how to troubleshoot a hung dockerd.