Docker volume cleanup: finding and removing orphaned volumes
Run docker system df and the Local Volumes line keeps growing. Run docker system prune and you reclaim images and build cache, but the volume count barely drops. A few weeks later the disk alert fires again.
Data persists in /var/lib/docker/volumes/ after its consumer is long gone. These volumes waste disk, complicate capacity planning, and can hide sensitive data in forgotten corners of the filesystem.
This guide covers how Docker volumes become orphaned, how to distinguish unused volumes from named data stores that should stay, and how to remove them safely.
What this means
An orphaned volume is a Docker-managed volume that exists but is not referenced by any container. Unlike writable layers, volumes persist beyond a container's lifecycle by default: when you run `docker rm` on a container, its anonymous and named volumes survive unless you explicitly delete them.
Since Docker Engine 23.0 (API 1.42), `docker volume prune` removes only anonymous unused volumes by default. Named volumes require the `--all` flag.
A volume attached to a stopped container is not orphaned. Docker counts exited containers as references, so a volume may appear unused while still being held by a container in the exited state. You must remove the container before you can delete the volume.
Common causes
| Cause | What it looks like | First thing to check |
|---|---|---|
| Containers removed without -v | docker volume ls is large but docker ps -a shows few containers | Whether existing volumes map to stopped containers |
| Expecting docker volume prune to clean named volumes | Prune completes but named volumes remain | Docker version and whether --all was used |
| Docker Compose anonymous volume misclassification | Old hash-like volume names accumulate after services are recreated | docker volume ls for Compose project labels |
| Daemon stale references after restart | Volumes appear orphaned immediately after dockerd restart | Daemon logs for volume driver errors |
| CI/CD or batch jobs creating short-lived containers | Anonymous volumes grow on build or test runners | Container creation and deletion rate on the host |
Quick checks
```shell
# List volumes not referenced by any container
docker volume ls --filter dangling=true
```
Any volume listed here has no current consumer.
```shell
# Show Docker disk usage breakdown by type
docker system df
```
Check the Local Volumes line. If it is the largest component or growing faster than images, volumes are your target.
```shell
# Cross-reference volumes against all containers including stopped
docker volume ls -q | while read -r vol; do
  count=$(docker ps -a --filter volume="$vol" -q | wc -l)
  [ "$count" -eq 0 ] && echo "$vol (orphaned)"
done
```
This confirms that a volume has zero references. It is slow on hosts with hundreds of volumes.
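On large hosts, a faster pattern is to build both sets once and diff them with `comm` instead of calling `docker ps` per volume. The sketch below runs against sample volume names fed in through here-docs; on a real host, the two files would come from the `docker volume ls` and `docker inspect` commands shown in the comments.

```shell
# Build the full volume list and the mounted-volume list once, then diff the
# sorted sets. The here-docs are sample data; on a real host replace them with:
#   docker volume ls -q | sort > all_volumes.txt
#   docker inspect --format '{{range .Mounts}}{{.Name}}{{"\n"}}{{end}}' \
#     $(docker ps -aq) | sort -u > mounted.txt
sort > all_volumes.txt <<'EOF'
app_pgdata
9f8e7d6c5b4a3f2e9f8e7d6c5b4a3f2e9f8e7d6c5b4a3f2e9f8e7d6c5b4a3f2e
tmp_build_cache
EOF
sort > mounted.txt <<'EOF'
app_pgdata
EOF
# Lines unique to all_volumes.txt have no container reference at all.
orphans=$(comm -23 all_volumes.txt mounted.txt)
echo "$orphans"
```

The volume names here (`app_pgdata`, `tmp_build_cache`) are hypothetical; only the unreferenced names should appear in the output.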
```shell
# Check volume sizes directly on the host filesystem
sudo du -sh /var/lib/docker/volumes/*
```
Use this when docker system df is too slow or you need exact directory sizes.
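To surface the biggest offenders first, rank the volume directories by size. The sketch below demonstrates the pipeline on a throwaway temp directory with hypothetical volume names; on a real host, point it at `/var/lib/docker/volumes` and run it with sudo, as in the comment.

```shell
# Rank volume directories by size, largest first. Real-host form:
#   sudo du -s /var/lib/docker/volumes/* | sort -rn | head -n 5
base=$(mktemp -d)   # stand-in for /var/lib/docker/volumes
mkdir -p "$base/app_pgdata/_data" "$base/tmp_cache/_data"
dd if=/dev/zero of="$base/app_pgdata/_data/blob" bs=1024 count=2048 2>/dev/null
dd if=/dev/zero of="$base/tmp_cache/_data/blob" bs=1024 count=64 2>/dev/null
ranking=$(du -s "$base"/* | sort -rn)
echo "$ranking"
rm -rf "$base"
```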
```shell
# Inspect a specific volume for references and driver info
docker volume inspect <volume_name>
```
Check the Labels and Mountpoint fields to determine if the volume is anonymous, named, or managed by a custom driver.
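Anonymous volumes are named with a 64-character hex ID, so a pattern test on the name alone separates them from named volumes in bulk. A sketch with sample names; on a real host, pipe `docker volume ls -q` into the function instead.

```shell
# Classify volume names: 64 hex characters => anonymous, anything else => named.
classify() {
  while read -r name; do
    if printf '%s\n' "$name" | grep -qE '^[0-9a-f]{64}$'; then
      echo "$name anonymous"
    else
      echo "$name named"
    fi
  done
}
# Sample input; real form: docker volume ls -q | classify
result=$(printf '%s\n' app_pgdata \
  0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef | classify)
echo "$result"
```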
```shell
# Check which containers mount a specific volume
docker ps -a --filter volume=<volume_name> --format '{{.ID}} {{.Names}} {{.Status}}'
```
If this returns results, the volume is not orphaned. Remove these containers first.
```shell
# Verify Docker version to know default prune behavior
docker version --format '{{.Server.Version}}'
```
Docker Engine 23.0 and later (API 1.42) default `docker volume prune` to anonymous volumes only.
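In automation, you can gate the `--all` flag on the reported version rather than assuming it. A sketch with a hard-coded sample version; in a real script, `server_version` would come from the `docker version` command above.

```shell
# Pass --all only when the engine is 23.0 or newer; older daemons reject it.
server_version="24.0.7"   # sample; real: $(docker version --format '{{.Server.Version}}')
if [ "$(printf '%s\n' 23.0.0 "$server_version" | sort -V | head -n 1)" = "23.0.0" ]; then
  prune_cmd="docker volume prune --all --force"
else
  prune_cmd="docker volume prune --force"
fi
echo "$prune_cmd"   # run with: $prune_cmd (after reviewing the volume list)
```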
```shell
# Review daemon logs for volume-related errors
journalctl -u docker.service --since "1 hour ago" | grep -iE "volume|error"
```
On systemd hosts, stale references or plugin failures often leave warnings here.
How to diagnose it
1. **Confirm volumes are the disk consumer.** Run `docker system df`. If the Local Volumes section dominates usage or reclaimable space, focus on volume cleanup.
2. **List candidate orphans.** Run `docker volume ls --filter dangling=true`.
3. **Verify no stopped containers hold references.** For each candidate, run `docker ps -a --filter volume=<name>`. If a stopped container appears, the volume is not dangling. Remove the container before the volume.
4. **Distinguish anonymous from named.** Anonymous volumes typically have long random names. Named volumes have human-readable names. Anonymous orphans are safe to prune with `docker volume prune`. Named orphans require `docker volume prune --all`.
5. **Inspect contents before deleting unknown volumes.** Mount the volume temporarily to verify it is not a forgotten database or config store:
   ```shell
   # Inspect volume contents before removal
   docker run --rm -v <volume_name>:/data alpine ls -la /data
   ```
6. **Check for stale daemon references.** If references seem inconsistent after an unclean restart, a clean daemon restart may reconcile state. WARNING: Restarting `dockerd` stops all containers unless live-restore is enabled. Plan for disruption.
7. **Correlate with container churn.** If volume count tracks with CI/CD or batch job frequency, the root cause is missing `-v` flags in teardown scripts.
Metrics and signals to monitor
| Signal | Why it matters | Warning sign |
|---|---|---|
| Orphaned volumes count | Unused volumes waste disk and may contain sensitive data | dangling=true count greater than 0 and increasing |
| Filesystem usage for /var/lib/docker | Persistent data growth leads to disk exhaustion | Volume usage greater than 75% of available disk or growing faster than 5GB/day |
| Container creation/deletion rate | High churn produces orphaned anonymous volumes if -v is not used | Create/destroy rate greater than 10x baseline without cleanup |
| Docker daemon responsiveness | Stale volume references after restart require a healthy API to reconcile | /_ping latency greater than 500ms or volume errors in daemon logs |
| Volume driver errors | Plugin or driver failures can leave volumes in limbo | Error-level entries in daemon logs referencing volumes |
Fixes
If containers were removed without volume cleanup
Remove containers with the -v flag to also delete anonymous volumes:
```shell
# Remove a container and its anonymous volumes
docker container rm -v <container_id>
```
For Compose environments, use docker compose down -v to remove both named and anonymous volumes declared in the file. docker compose rm -v does not remove named volumes declared in the compose file.
If named volumes survive prune
Modern Docker defaults to protecting named volumes. Run:
```shell
# Remove all unused volumes, including named ones
docker volume prune --all
```
This requires API version 1.42 (Docker Engine 23.0+). Older daemons do not support this flag. Verify the daemon version before using it in automation.
If a stopped container blocks removal
docker volume rm --force does not override the in-use check. It only skips the confirmation prompt. Identify the consumer:
```shell
# Find containers referencing the volume
docker ps -a --filter volume=<volume_name> --format '{{.ID}}'
```
Remove the container, then the volume:
```shell
# Remove stopped container and orphaned volume
docker rm <container_id>
docker volume rm <volume_name>
```
If stale daemon references hide true orphans
After an unclean restart, volumes may appear unused while the daemon still tracks them.
```shell
# Restart Docker daemon (containers survive if live-restore is enabled)
systemctl restart docker
```
WARNING: This stops all containers unless live-restore is enabled. Plan for disruption.
Then re-run docker volume ls --filter dangling=true and remove any volumes that remain unreferenced.
If bind mounts are mistaken for volumes
docker volume prune never removes bind mounts. Verify the mount type:
```shell
# Check mount type for a container
docker inspect <container_id> --format '{{json .Mounts}}'
```
If "Type": "bind", reduce disk usage by cleaning the host path directly, not through Docker volume commands.
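Without jq installed, a quick way to read the mount types is to split the JSON into one mount per line and grep out the Type fields. The JSON below is a sample of the shape `docker inspect` returns; on a real host it would come from the command shown in the comment.

```shell
# Sample .Mounts JSON; real form:
#   mounts_json=$(docker inspect <container_id> --format '{{json .Mounts}}')
mounts_json='[{"Type":"bind","Source":"/srv/app/logs","Destination":"/logs"},{"Type":"volume","Name":"app_pgdata","Destination":"/var/lib/postgresql/data"}]'
# One mount per line, then pull out each mount's type.
types=$(printf '%s\n' "$mounts_json" | tr '}' '\n' | grep -o '"Type":"[a-z]*"')
echo "$types"
```

Any `"Type":"bind"` entries point at host paths that Docker volume commands will never clean.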
Prevention
- Remove transient containers with `-v` so anonymous volumes do not accumulate.
- Use `docker compose down -v` in CI/CD teardown scripts instead of `docker compose rm` or plain `docker rm`.
- Monitor `docker volume ls --filter dangling=true` and alert on nonzero counts in production.
- Set filesystem usage alerts for `/var/lib/docker` at 70% to catch growth before it blocks operations.
- Document named volumes that are intentionally persistent so on-call engineers do not prune them.
- Avoid scripting `docker volume prune --all` blindly in production; always verify that critical named volumes are mounted by running containers.
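The last two points combine naturally into a guarded cleanup script: keep an explicit list of protected named volumes and refuse to remove anything on it. A sketch with sample volume names; in production, the candidate list would come from `docker volume ls -q --filter dangling=true`.

```shell
# Guarded cleanup: never remove volumes on the protected list.
protected="app_pgdata grafana_data"                 # documented persistent volumes
candidates="app_pgdata tmp_build_cache ci_scratch"  # sample; real: $(docker volume ls -q --filter dangling=true)

plan=""
for vol in $candidates; do
  case " $protected " in
    *" $vol "*) echo "skip $vol (protected)" ;;
    *) plan="$plan $vol"
       echo "would remove $vol" ;;   # replace echo with: docker volume rm "$vol"
  esac
done
```

Keeping the destructive command behind an echo until the plan has been reviewed makes the script safe to dry-run in CI.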
How Netdata helps
- Disk usage monitoring on the filesystem hosting `/var/lib/docker` spots orphaned volume growth before disk-full alerts fire.
- Container churn metrics identify workloads or CI pipelines that leak volumes through high creation and deletion rates.
- Per-container volume mount visibility traces a suspicious volume back to its last consumer without manual CLI cross-referencing.
- Filesystem utilization alerts for `/var/lib/docker` give early warning when orphaned volumes push disk usage toward critical thresholds.
Related guides
- Docker disk space full: how to troubleshoot /var/lib/docker
- Docker daemon not responding: how to troubleshoot a hung dockerd
- Docker container keeps restarting: causes, checks, and fixes
- Docker container exits immediately: how to diagnose it
- Docker commands hang: docker ps, inspect, and exec freezes
- Docker container high memory usage: how to diagnose it
- Docker container memory leak: how to find one and prove it
- Docker CPU throttling: the hidden cause of container latency
- Docker container running but unhealthy: how to diagnose health check failures
- Docker DNS not working inside containers
- Docker container high CPU usage: causes and fixes
- Docker exit code 1: application errors and how to find them
