Docker log rotation: preventing json-file logs from filling disk
Docker’s default json-file log driver appends every line of container stdout and stderr to a JSON file on the host under /var/lib/docker/containers/<id>/. Without size limits, that file grows monotonically. A single verbose container can consume tens of gigabytes, and because the driver is the default, this often happens silently until /var/lib/docker fills. At that point image pulls fail, container creates are rejected, and the daemon may hang on storage operations. This guide covers how to cap log files with daemon.json and per-container log-opt overrides, verify the caps are working, and choose a different driver when json-file is not appropriate.
What this means
In production, json-file is convenient because docker logs works out of the box and the output is machine-readable JSON. The tradeoff is that the daemon keeps every log line on disk unless you tell it otherwise. The logs are stored alongside the container’s writable layer, so they feed directly into the container disk usage reported by docker system df. Exited containers also retain their logs until the container is removed. When disk exhaustion hits, the failure mode is a cascade: writes slow, daemon latency spikes, image pulls and container creates fail, and in extreme cases dockerd becomes unresponsive while containers continue running. Rotation is not a performance tweak. It is an availability requirement for any node using json-file.
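The file itself is easy to inspect. A quick check, assuming the default data root of /var/lib/docker (root access is required): the json-file driver writes to <full-container-id>-json.log inside the container's directory, with rotated copies suffixed .1, .2, and so on.
# List the json-file log and any rotated copies for one container
ls -lh /var/lib/docker/containers/$(docker inspect --format '{{.Id}}' <container_name_or_id>)/*-json.log*
If this single file is already multiple gigabytes, no rotation policy is in effect for that container.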
Common causes
| Cause | What it looks like | First thing to check |
|---|---|---|
| No log-opts in daemon.json | Container log files grow without bound across the host | cat /etc/docker/daemon.json and docker info \| grep -i "logging driver" |
| Per-container override without limits | One container’s directory is far larger than the rest | docker inspect --format '{{json .HostConfig.LogConfig}}' <id> |
| High-volume application logging | Disk growth tracks traffic spikes or batch jobs | docker logs --tail 100 <id> |
| Exited containers not pruned | Disk usage climbs while the running container count stays flat | docker ps -a --filter status=exited |
| json-file used for centralised aggregation | Logs are written to disk and then read by a node agent, doubling I/O | docker info \| grep -i "logging driver" |
Quick checks
# Check the active default logging driver
docker info | grep -i "logging driver"
# Inspect daemon configuration for log options
cat /etc/docker/daemon.json
# Check a container's specific log configuration
docker inspect --format '{{json .HostConfig.LogConfig}}' <container_id>
# See container directory sizes, which include logs and writable layers
du -sh /var/lib/docker/containers/*/ 2>/dev/null | sort -rh | head -10
# List stopped containers that still retain logs and layers
docker ps -a --filter status=exited --format '{{.ID}} {{.Names}}'
# Check Docker-wide disk usage and reclaimable space
docker system df -v
If daemon.json is missing or has no log-opts block, the host has no default rotation. If a container’s LogConfig shows no max-size, that container is unprotected regardless of the daemon default.
How to diagnose it
1. Confirm the driver and locate the largest consumers. Run docker info | grep -i "logging driver" and du -sh /var/lib/docker/containers/*/. If the driver is json-file and individual container directories are hundreds of megabytes or larger, logs are the likely culprit. This tells you whether the fix is rotation or a driver migration. Next, inspect the daemon configuration.
2. Inspect daemon.json for log-opts. Look for a log-opts key containing max-size and max-file. Why: the daemon default applies to every container created after the daemon reads the policy. Result: if these keys are missing, nothing limits log growth. Next: add the policy and plan a daemon restart.
3. Inspect existing containers for overrides. Run docker inspect --format '{{json .HostConfig.LogConfig}}' on large containers. Why: per-container --log-opt flags override the daemon default. Result: if a container specifies its own driver or omits max-size, it will grow without the daemon safeguard. Next: recreate the container with explicit limits.
4. Verify container creation time relative to policy changes. Check when the container was created versus when daemon.json was last modified and the daemon restarted (see the sketch after this list). Why: Docker applies logging configuration at container creation time; changes are not retroactive. Result: containers started before the policy change continue under the old rules. Next: recreate stale containers after the daemon restart.
5. Audit exited containers. Run docker ps -a --filter status=exited. Why: stopped containers retain log files until removed. Result: a large population of exited containers can explain disk pressure even when running containers look fine. Next: prune exited containers after confirming you do not need their logs.
6. Check application log velocity. Run docker logs --tail 100 <container_id>. Why: if an application emits megabytes per minute, a small max-size may rotate too frequently, while a large one may still exhaust disk if max-file is high. Result: you may need to reduce application verbosity or switch to a streaming driver such as fluentd or syslog.
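To make step 4 concrete, compare the container's creation timestamp with the modification time of daemon.json and the daemon's last start. A minimal sketch; it assumes the standard /etc/docker/daemon.json location and a systemd-managed daemon.
# When the container was created (RFC 3339, UTC)
docker inspect --format '{{.Created}}' <container_id>
# When the daemon policy file last changed
stat -c '%y' /etc/docker/daemon.json
# When dockerd last (re)started on systemd hosts
systemctl show docker --property=ActiveEnterTimestamp
If the container predates both timestamps, it is still running under the old logging configuration and needs to be recreated.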
Metrics and signals to monitor
| Signal | Why it matters | Warning sign |
|---|---|---|
| Docker disk usage by containers | json-file logs live inside container directories | Any single container directory > 5 GB |
| Exited container count | Stopped containers retain logs until removed | Count growing without automated cleanup |
| Docker disk usage growth rate | Unbounded logs can exhaust space faster than expected | Growth > 1 GB/day without workload change |
| Disk free on /var/lib/docker | Rotation is a safeguard, not a guarantee | < 20% free |
| Docker daemon response latency | Disk pressure from logs degrades daemon performance | Sustained latency > 500 ms |
Fixes
If the cause is missing rotation configuration
Create or edit /etc/docker/daemon.json:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
Apply the change with a daemon restart:
# Disruptive: restarts dockerd and briefly affects all containers
systemctl restart docker
After the restart, only new containers pick up the policy. Existing containers continue with their previous configuration until recreated.
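To confirm the new default is active, start a throwaway container and inspect the log configuration it was created with. A quick sketch; the name logtest and the alpine image are arbitrary choices.
# New containers should now inherit the daemon-level policy
docker run -d --name logtest alpine sleep 60
docker inspect --format '{{json .HostConfig.LogConfig}}' logtest
# Expect the output to include "max-size":"10m" and "max-file":"3"
docker rm -f logtest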
If the cause is per-container overrides without limits
Recreate the container with explicit log options:
docker run --log-driver json-file \
--log-opt max-size=10m \
--log-opt max-file=3 \
...
Docker applies log configuration at creation time. You cannot update log driver options on a running container.
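In practice that means stopping and removing the existing container, then recreating it with its original run parameters plus the log options. A hedged sketch using a hypothetical container named web running nginx; substitute your own image, name, and flags.
docker stop web && docker rm web
docker run -d --name web \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  nginx
If the container is managed by Compose or an orchestrator, set the logging options in that tool's configuration instead so they survive future recreations.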
If the cause is an unsuitable log driver
If you are shipping logs to a central system, writing everything to json-file on disk may be redundant. Consider an alternative:
| Driver | Stores logs on host disk? | Rotation handled by | Best for |
|---|---|---|---|
| json-file | Yes, in container directory | Docker daemon via max-size/max-file | Local debugging, small deployments |
| local | Yes, in a binary format | Docker daemon with built-in rotation | Production nodes that need minimal overhead |
| journald | No, forwarded to systemd journal | journald configuration | Systems already centralising the systemd journal |
| syslog | No, forwarded to syslog daemon | syslog daemon configuration | Traditional syslog infrastructure |
| fluentd | No, forwarded to Fluentd | Fluentd pipeline | Environments with existing Fluentd aggregation |
Switching the daemon default requires editing daemon.json and restarting dockerd. Existing containers remain on their original driver until recreated.
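For example, the local driver accepts the same size-based options, rotates by default, and still works with docker logs, so a per-container switch is a small change. A sketch, assuming the container can be recreated; <image> is a placeholder.
docker run -d --name worker \
  --log-driver local \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  <image>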
If the cause is application log volume
Reduce verbosity at the application level if possible. If the workload legitimately emits high volumes, avoid json-file entirely and stream directly to fluentd or syslog so the host disk is not the bottleneck.
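If you take the streaming route, the fluentd driver sends each line to a collector over the network instead of the host disk. A minimal sketch, assuming a Fluentd or Fluent Bit forward input listening on localhost:24224; the address, tag, and <image> are placeholders.
docker run -d --name app \
  --log-driver fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag=docker.{{.Name}} \
  <image>
Depending on the Docker version and whether the dual logging cache is enabled, docker logs may not show output for remote drivers, so plan to read logs from the aggregation system.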
If the cause is accumulated exited containers
Remove stopped containers and reclaim their logs and writable layers:
# Destructive: removes all stopped containers
docker container prune -f
Warning: This deletes stopped containers and their logs. Confirm you do not need the data. For targeted cleanup, remove specific containers by ID or name.
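For that targeted cleanup, it helps to see which exited containers own the most disk before removing anything. A sketch; it only reads sizes, and removal stays a per-container decision. Reading /var/lib/docker requires root.
# On-disk size per exited container, largest first
for id in $(docker ps -aq --filter status=exited); do
  du -sh /var/lib/docker/containers/$(docker inspect --format '{{.Id}}' "$id") 2>/dev/null
done | sort -rh
# Remove one container once you are sure its logs are not needed
docker rm <container_id>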
Prevention
- Set log-opts in daemon.json as part of host provisioning. Do not rely on operators to remember --log-opt flags.
- Monitor Docker disk usage by containers and alert at 70% of the /var/lib/docker filesystem. Waiting until 95% leaves no room for cleanup operations.
- Schedule automated pruning of exited containers.
- Evaluate log driver choice during service design. json-file is convenient for local debugging but becomes a liability on high-traffic nodes.
- Test rotation by starting a verbose container and verifying that total log size stays within max-size multiplied by max-file (see the sketch below).
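One way to run that test: start a deliberately noisy container with tight limits and confirm the on-disk total stays near max-size times max-file. A hedged sketch; the container name logspam, the 1 MB / 2-file limits, and the one-minute wait are arbitrary, and reading /var/lib/docker requires root.
# Start a container that logs continuously, capped at 2 x 1 MB
docker run -d --name logspam \
  --log-opt max-size=1m --log-opt max-file=2 \
  alpine sh -c 'while true; do echo "rotation test line"; done'
sleep 60
# The active log plus its rotated copy should total roughly 2 MB
du -ch /var/lib/docker/containers/$(docker inspect --format '{{.Id}}' logspam)/*-json.log* | tail -1
# Clean up the test container
docker rm -f logspam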
How Netdata helps
- Netdata tracks Docker disk usage by category, so you can see when container layers and logs dominate the total.
- Container state counts surface exited containers that are quietly consuming disk.
- Host disk space monitoring on the /var/lib/docker filesystem gives early warning before rotation policies are stressed.
- Correlating disk usage spikes with container start events helps distinguish log growth from image pulls or build cache.
Related guides
- Docker logs taking too much disk space: how to fix log growth
- Docker disk space full: how to troubleshoot /var/lib/docker
- Docker daemon not responding: how to troubleshoot a hung dockerd
- Docker container keeps restarting: causes, checks, and fixes
- Docker monitoring checklist: the signals every production host needs




