Kubernetes PVC stuck Pending: storage class, provisioner, and quota
A PersistentVolumeClaim stuck in Pending is a storage-layer failure. Unlike a pod that is Pending due to CPU or memory, a PVC in Pending means the cluster cannot provision the volume. The cause usually sits in one of three layers: the StorageClass and its provisioner, namespace-level ResourceQuota limits, or topology and cloud constraints that prevent volume creation and attachment.
When a PVC stays unbound, dependent pods cannot start. StatefulSets stall, rolling updates hang, and storage-dependent applications remain offline.
What this means
A PVC is a request for storage. The control plane must either bind an existing PersistentVolume or trigger dynamic provisioning via a StorageClass. If the PVC remains Pending, binding or provisioning failed. No volume is allocated, and workloads referencing the claim cannot proceed.
Existing pods and volumes keep running, but new claims are blocked. The failure is usually in the control plane or an external dependency such as a CSI driver, cloud API, or namespace quota.
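For context, this is what a claim that relies on dynamic provisioning looks like. The namespace, claim name, and `fast-ssd` class below are placeholders; the class must exist (or a default class must be set) before provisioning can start.

```bash
# Minimal PVC requesting dynamic provisioning; "fast-ssd" is a placeholder
# StorageClass name -- replace it with a class that exists in your cluster.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example
  namespace: demo
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd   # omit this field to fall back to the default class
  resources:
    requests:
      storage: 10Gi
EOF

# Watch the claim move from Pending to Bound once the provisioner creates a volume
kubectl get pvc data-example -n demo -w
```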
Common causes
| Cause | What it looks like | First thing to check |
|---|---|---|
| Missing StorageClass or no default set | Events: no persistent volumes available for this claim; PVC has no storageClassName set | kubectl get storageclass |
| Provisioner or CSI driver failure | ProvisioningFailed events; controller logs show provisioning retries | CSI driver controller health in kube-system |
| ResourceQuota exhausted | PVC creation blocked; events cite quota exceeded for requests.storage or persistentvolumeclaims | kubectl describe resourcequota -n <ns> |
| Zone or topology mismatch | Pod scheduled in one zone, volume provisioned in another; scheduler events cite volume node affinity conflict | StorageClass volumeBindingMode and pod node affinity |
| Cloud provider quota or attachment limits | ProvisioningFailed referencing cloud API limits; attachment limit exceeded on node | kubectl get volumeattachments and cloud console |
Quick checks
These checks are read-only and safe to run during an incident.
```bash
# List all pending PVCs cluster-wide
kubectl get pvc --all-namespaces --field-selector=status.phase=Pending

# Describe a stuck PVC to read events and requested storage class
kubectl describe pvc <pvc-name> -n <namespace>

# Verify storage classes and default annotation
kubectl get storageclass

# Check namespace ResourceQuota utilization
kubectl get resourcequota -n <namespace>
kubectl describe resourcequota -n <namespace> <quota-name>

# Check for provisioning and attach failures in events
kubectl get events --all-namespaces --field-selector reason=ProvisioningFailed
kubectl get events --all-namespaces --field-selector reason=FailedAttachVolume

# Inspect VolumeAttachments to detect stuck or duplicate attaches
kubectl get volumeattachments

# Check registered CSI drivers against the StorageClass provisioner
kubectl get csidrivers

# Check scheduler events for pods using the PVC
kubectl get events -n <namespace> --field-selector reason=FailedScheduling
```
How to diagnose it
1. Confirm the PVC is the blocker. Check whether the pod consuming the PVC is stuck. Run `kubectl get pods -n <ns> --field-selector status.phase=Pending` and read the pod events. If events reference the PVC name or a volume name, storage is the primary blocker. If not, the issue may be scheduling or resources; see Kubernetes pod stuck Pending: scheduling failures explained.
2. Describe the PVC and read its events. Run `kubectl describe pvc <name> -n <ns>`. Look for messages such as `waiting for a volume to be created` or `no persistent volumes available for this claim`. These events distinguish between a missing StorageClass, a provisioner that is not responding, and a quota rejection. Use them to decide whether to investigate the StorageClass, the provisioner, or namespace policy.
3. Validate the StorageClass. Run `kubectl get storageclass` and verify that the PVC's requested `storageClassName` exists. If the PVC omitted `storageClassName`, ensure a default StorageClass is configured. If the name is misspelled or the class was deleted, dynamic provisioning cannot start. Note that `storageClassName` is immutable after creation; you must delete and recreate the PVC to change it. (The first three steps can be chained; see the sketch after this list.)
4. Check provisioner health. If the StorageClass exists, inspect its `provisioner` field. The controller responsible for that provisioner must be running. Look in `kube-system` or the driver's namespace for controller pod restarts, `CrashLoopBackOff`, or resource pressure. Read the controller logs: CSI controllers usually run as a Deployment or StatefulSet with sidecars such as external-provisioner. Look for gRPC errors talking to the node plugin, cloud API rate limiting, or OOM kills. If the controller is unhealthy, it cannot create volumes and the PVC waits indefinitely.
5. Verify the namespace ResourceQuota. Run `kubectl describe resourcequota -n <namespace>`. If `requests.storage` or `persistentvolumeclaims` is at the hard limit, the API server rejects new PVCs at admission time. This happens even if the cluster has abundant physical storage. If a previous failed PVC creation attempt consumed the quota count, deleting the stuck Pending PVC frees the slot.
6. Check topology and binding mode. If the StorageClass uses `volumeBindingMode: WaitForFirstConsumer`, the PVC stays Pending until a pod referencing it is scheduled, so make sure the scheduler has actually placed the pod. If the pod sets `nodeName` directly instead of node selectors, the scheduler is bypassed and the PVC never receives the topology hint it needs. If the class uses `volumeBindingMode: Immediate`, the PVC binds as soon as the volume is provisioned. A pod pending with a volume node affinity conflict indicates a zone mismatch; align node labels with the StorageClass `allowedTopologies` or switch to `WaitForFirstConsumer` for future claims.
7. Look for cloud-level limits. Check VolumeAttachments. If a node already has many volumes attached, the cloud provider may refuse new attachments. Cloud providers enforce per-node attachment limits that vary by instance type. A stuck VolumeAttachment from a previous pod or terminated node can also block new mounts on that node.
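The first three steps can be chained into a single read-only check. The PVC and namespace names below are placeholders; the commands only read cluster state.

```bash
# Placeholders: replace <pvc-name> and <namespace> with your claim.
# Pulls the claim's requested class, confirms the class exists,
# and surfaces the provisioner name to investigate next.
SC=$(kubectl get pvc <pvc-name> -n <namespace> -o jsonpath='{.spec.storageClassName}')
echo "Requested StorageClass: ${SC:-<none, falls back to default>}"

# An error here means the class was deleted or the name is misspelled
kubectl get storageclass "$SC" -o jsonpath='{.provisioner}{"\n"}'

# Events scoped to the claim usually name the failing layer directly
kubectl get events -n <namespace> --field-selector involvedObject.name=<pvc-name>
```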
Metrics and signals to monitor
| Signal | Why it matters | Warning sign |
|---|---|---|
| PVC phase | Pending PVCs block dependent pods | Any production PVC in Pending longer than 5 minutes |
| ResourceQuota utilization | Quotas silently block PVC creation at admission | requests.storage or persistentvolumeclaims used above 80% of hard limit |
| Provisioning and attach events | Direct evidence of provisioner or cloud failures | Increasing ProvisioningFailed or FailedAttachVolume events |
| VolumeAttachments | Stuck attachments prevent rescheduling | VolumeAttachment remaining after its pod was deleted |
| Pending pods with PVC references | Correlates scheduling and storage problems | FailedScheduling events citing volume conflicts |
| Node disk attachment count | Cloud instances have hard attachment limits | Attachment count approaching the provider limit per node |
Fixes
If the StorageClass is missing or misconfigured
Create or correct the StorageClass. Ensure it has a valid provisioner field that matches the installed CSI driver or in-tree plugin. If the PVC omitted storageClassName and no default exists, either set a default StorageClass or recreate the PVC with the class name set explicitly (the field is immutable on an existing claim). Note that storageClassName: "" requests a statically provisioned PV with an explicitly empty class, which is different from an omitted field.
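A minimal sketch of creating a default class, assuming the AWS EBS CSI driver purely as an example; substitute the provisioner and any parameters for the driver your cluster actually runs.

```bash
# Example StorageClass; the provisioner (ebs.csi.aws.com) is an AWS-specific
# placeholder -- substitute the CSI driver installed in your cluster.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # mark as the default class
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
EOF

# Confirm the (default) marker appears next to the class name
kubectl get storageclass
```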
If the provisioner is failing
Restarting the provisioner is a last resort. First, read the controller logs and check for cloud credential errors, misconfigured driver flags, or OOM kills. If the CSI driver is crashing, fix its configuration or raise its memory limits. After resolving the driver issue, you may need to delete and recreate the PVC if the provisioner exhausted its retry threshold on the original claim.
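A starting point for log inspection. The Deployment name and sidecar container name below are typical for CSI drivers but vary by driver; check the driver's install manifests for the real names.

```bash
# Names below are driver-specific placeholders; adjust for your CSI driver.
# Find the controller pods and check for restarts or CrashLoopBackOff
kubectl get pods -n kube-system | grep -i csi

# The external-provisioner sidecar (often named csi-provisioner) logs CreateVolume errors
kubectl logs -n kube-system deploy/<csi-controller-deployment> -c csi-provisioner --tail=100

# Check whether a controller container was OOM-killed or restarted recently
kubectl describe pod -n kube-system <csi-controller-pod> | grep -A5 'Last State'
```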
If ResourceQuota is exhausted
Raise the quota or reclaim existing claims. Identify unused PVCs in the namespace and delete them. If a previous failed PVC creation consumed the persistentvolumeclaims quota count, removing the stuck Pending PVC frees the slot. For multi-tenant clusters, review whether quota limits match actual workload needs before the next deployment.
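One way to raise the limits in place, assuming a quota object named `<quota-name>`; the values are examples only, so review actual usage before changing the hard limits.

```bash
# Review current usage vs. hard limits first
kubectl describe resourcequota <quota-name> -n <namespace>

# Example values only: raise the storage-related hard limits on the existing quota
kubectl patch resourcequota <quota-name> -n <namespace> --type=merge \
  -p '{"spec":{"hard":{"requests.storage":"500Gi","persistentvolumeclaims":"20"}}}'

# Alternatively, list claims and their requested sizes to spot candidates for cleanup
kubectl get pvc -n <namespace> --sort-by=.spec.resources.requests.storage
```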
If topology is blocking binding
When using volumeBindingMode: WaitForFirstConsumer, ensure the pod is actually being scheduled by the scheduler. If the pod sets nodeName directly, switch to nodeSelector or affinity rules so the scheduler can provide a topology hint. For Immediate binding with zone constraints, verify that node labels and allowed topologies align with the zones where the cloud provisioner can create volumes.
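A sketch of a zone-aware class, using the GKE persistent disk driver and zone values purely as examples; the point is the combination of `WaitForFirstConsumer` with `allowedTopologies`.

```bash
# Zone values and provisioner are placeholders for your cloud and driver.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zonal-ssd
provisioner: pd.csi.storage.gke.io       # example GKE driver; substitute your own
volumeBindingMode: WaitForFirstConsumer  # delay provisioning until the pod is scheduled
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - us-central1-a
          - us-central1-b
EOF
```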
If cloud limits are hit
Request a volume attachment limit increase from your cloud provider, or spread workloads across more nodes. If a volume is stuck attached to a terminated node, force-detach it from the cloud console or wait for the attach-detach controller to time out. Do not manually delete VolumeAttachments unless you understand the node state and are prepared for data consistency risks.
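A quick, read-only way to see which nodes are closest to their attachment limits (requires `jq`); the actual per-node limit depends on the instance type, so compare the counts against your provider's documented numbers.

```bash
# Count VolumeAttachments per node; nodes near the provider limit refuse new attaches
kubectl get volumeattachments -o json \
  | jq -r '.items[].spec.nodeName' | sort | uniq -c | sort -rn

# Describe a specific attachment to see errors and whether a detach is stuck
kubectl describe volumeattachment <attachment-name>
```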
Prevention
- Annotate a default StorageClass so PVCs without an explicit class still provision dynamically.
- Monitor ResourceQuota utilization per namespace and alert at 80% to avoid surprise creation failures.
- Use `volumeBindingMode: WaitForFirstConsumer` for topology-constrained cloud storage to avoid provisioning volumes in zones where no pod can run.
- Set reasonable storage requests and review namespace quotas before deploying new StatefulSets.
- Stream Kubernetes events to persistent storage so you can review `ProvisioningFailed` patterns after events age out.
- Validate StorageClass provisioner names during cluster upgrades, especially when migrating from in-tree plugins to CSI drivers.
How Netdata helps
- PVC status tracked alongside node storage saturation and CSI health metrics.
- ResourceQuota utilization charts show namespace-level consumption trends before quotas hard-block new claims.
- Disk latency and fsync metrics on control plane and worker nodes help distinguish provisioner issues from underlying storage I/O stalls.
- Pending pod counts paired with PVC phase alerts confirm whether a scheduling backlog is storage-related or resource-related.
Related guides
- For control plane issues that can cascade to storage controllers, see Kubernetes API server slow or unresponsive: causes and fixes.
- If nodes are failing while PVCs are stuck, see Kubernetes node NotReady: kubelet, runtime, and network diagnosis.
- When pods stay in ContainerCreating after PVC binding, see Kubernetes pod stuck ContainerCreating: volume, network, and image issues.
- For general scheduling problems not related to storage, see Kubernetes pod stuck Pending: scheduling failures explained.
- To ensure you are monitoring the right control plane signals, see Kubernetes monitoring checklist: the signals every production cluster needs.





