Kubernetes PVC stuck Pending: storage class, provisioner, and quota

A PersistentVolumeClaim stuck in Pending is a storage-layer failure. Unlike a pod that is Pending due to CPU or memory, a PVC in Pending means the cluster cannot provision the volume. The cause usually sits in one of three layers: the StorageClass and its provisioner, namespace-level ResourceQuota limits, or topology and cloud constraints that prevent volume creation and attachment.

When a PVC stays unbound, dependent pods cannot start. StatefulSets stall, rolling updates hang, and storage-dependent applications remain offline.

What this means

A PVC is a request for storage. The control plane must either bind an existing PersistentVolume or trigger dynamic provisioning via a StorageClass. If the PVC remains Pending, binding or provisioning failed. No volume is allocated, and workloads referencing the claim cannot proceed.
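For reference, a minimal claim that requests dynamic provisioning looks like the following; the name, namespace, class, and size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim           # illustrative name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard # must name an existing StorageClass
  resources:
    requests:
      storage: 20Gi
```

If storageClassName is omitted entirely, the cluster's default StorageClass, when one is configured, is used instead.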

Existing pods and volumes keep running, but new claims are blocked. The failure is usually in the control plane or an external dependency such as a CSI driver, cloud API, or namespace quota.

Common causes

| Cause | What it looks like | First thing to check |
| --- | --- | --- |
| Missing or deleted StorageClass | Events: no persistent volumes available for this claim and no storageClassName set | kubectl get storageclass |
| Provisioner or CSI driver failure | ProvisioningFailed events; controller logs show provisioning retries | CSI driver controller health in kube-system |
| ResourceQuota exhausted | PVC creation blocked; events cite quota exceeded for requests.storage or persistentvolumeclaims | kubectl describe resourcequota -n <ns> |
| Zone or topology mismatch | Pod scheduled in one zone, volume provisioned in another; scheduler events cite volume node affinity conflict | StorageClass volumeBindingMode and pod node affinity |
| Cloud provider quota or attachment limits | ProvisioningFailed referencing cloud API limits; attachment limit exceeded on node | kubectl get volumeattachments and cloud console |

Quick checks

These checks are read-only and safe to run during an incident.

```shell
# List all pending PVCs cluster-wide
kubectl get pvc --all-namespaces --field-selector=status.phase=Pending
# Describe a stuck PVC to read events and requested storage class
kubectl describe pvc <pvc-name> -n <namespace>
# Verify storage classes and default annotation
kubectl get storageclass
# Check namespace ResourceQuota utilization
kubectl get resourcequota -n <namespace>
kubectl describe resourcequota -n <namespace> <quota-name>
# Check for provisioning and attach failures in events
kubectl get events --all-namespaces --field-selector reason=ProvisioningFailed
kubectl get events --all-namespaces --field-selector reason=FailedAttachVolume
# Inspect VolumeAttachments to detect stuck or duplicate attaches
kubectl get volumeattachments
# Check registered CSI drivers against the StorageClass provisioner
kubectl get csidrivers
# Check scheduler events for pods using the PVC
kubectl get events -n <namespace> --field-selector reason=FailedScheduling
```

How to diagnose it

  1. Confirm the PVC is the blocker. Check whether the pod consuming the PVC is stuck. Run kubectl get pods -n <ns> --field-selector status.phase=Pending and read pod events. If events reference the PVC name or a volume name, storage is the primary blocker. If not, the issue may be scheduling or resources. See Kubernetes pod stuck Pending: scheduling failures explained.

  2. Describe the PVC and read its events. Run kubectl describe pvc <name> -n <ns>. Look for messages such as waiting for a volume to be created or no persistent volumes available for this claim. These events distinguish between a missing StorageClass, a provisioner that is not responding, and a quota rejection. Use them to decide whether to investigate the StorageClass, the provisioner, or namespace policy.

  3. Validate the StorageClass. Run kubectl get storageclass. Verify that the PVC’s requested storageClassName exists. If the PVC omitted storageClassName, ensure a default StorageClass is configured. If the name is misspelled or the class was deleted, dynamic provisioning cannot start. Note that storageClassName is immutable after creation; you must delete and recreate the PVC to change it.

  4. Check the provisioner health. If the StorageClass exists, inspect its provisioner field. The controller responsible for that provisioner must be running. Look in kube-system or the driver’s namespace for controller pod restarts, CrashLoopBackOff, or resource pressure. Read the controller logs: CSI controllers usually run as a Deployment or StatefulSet with sidecars such as external-provisioner. Look for gRPC errors talking to the node plugin, cloud API rate limiting, or OOM kills. If the controller is unhealthy, it cannot create volumes and the PVC will wait indefinitely.

  5. Verify namespace ResourceQuota. Run kubectl describe resourcequota -n <namespace>. If requests.storage or persistentvolumeclaims is at the hard limit, the API server rejects the PVC at admission time. This happens even if the cluster has abundant physical storage. If a previous failed PVC creation attempt consumed the quota count, deleting the stuck Pending PVC frees the slot.

  6. Check topology and binding mode. If the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the PVC stays Pending until a pod referencing it is scheduled. Ensure the scheduler has actually placed the pod. If the pod uses nodeName directly instead of node selectors, the scheduler is bypassed and the PVC never receives the topology hint it needs. If the class uses volumeBindingMode: Immediate, the PVC binds as soon as the volume is provisioned. A pod pending with a volume node affinity conflict indicates a zone mismatch; align node labels with the StorageClass allowedTopologies or switch to WaitForFirstConsumer for future claims.

  7. Look for cloud-level limits. Check VolumeAttachments. If a node already has many volumes attached, the cloud provider may refuse new attachments. Cloud providers enforce per-node attachment limits that vary by instance type. A stuck VolumeAttachment from a previous pod or terminated node can also block new mounts on that node.
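For the pod-to-claim correlation in step 1, a jsonpath query can list every pod alongside the PVCs it mounts; this is a read-only sketch, and <namespace> and <pvc-name> are placeholders:

```shell
# Print each pod with the PVC names it references, then filter for the stuck claim
kubectl get pods -n <namespace> \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}' \
  | grep <pvc-name>
```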

Metrics and signals to monitor

| Signal | Why it matters | Warning sign |
| --- | --- | --- |
| PVC phase | Pending PVCs block dependent pods | Any production PVC in Pending longer than 5 minutes |
| ResourceQuota utilization | Quotas silently block PVC creation at admission | requests.storage or persistentvolumeclaims used above 80% of hard limit |
| Provisioning and attach events | Direct evidence of provisioner or cloud failures | Increasing ProvisioningFailed or FailedAttachVolume events |
| VolumeAttachments | Stuck attachments prevent rescheduling | VolumeAttachment remaining after its pod was deleted |
| Pending pods with PVC references | Correlates scheduling and storage problems | FailedScheduling events citing volume conflicts |
| Node disk attachment count | Cloud instances have hard attachment limits | Attachment count approaching the provider limit per node |

Fixes

If the StorageClass is missing or misconfigured

Create or correct the StorageClass. Ensure it has a valid provisioner field that matches the installed CSI driver or in-tree plugin. If the PVC omitted storageClassName and no default exists, either set a default StorageClass or patch the PVC to specify one explicitly. Note that storageClassName: "" requests a statically provisioned PV with an explicitly empty class, which is different from an omitted field.
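A working default class might look like the following; the class name and provisioner are examples (here the AWS EBS CSI driver) and must match what is actually installed in your cluster:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # makes this the cluster default
provisioner: ebs.csi.aws.com   # must match a registered driver (kubectl get csidrivers)
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```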

If the provisioner is failing

Restarting the provisioner is a last resort. First, read the controller logs and check for cloud credential errors, misconfigured driver flags, or OOM kills. If the CSI driver is crashing, fix its configuration or raise its memory limits. After resolving the driver issue, you may need to delete and recreate the PVC if the provisioner exhausted its retry threshold on the original claim.
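Controller pod labels, deployment names, and sidecar container names all vary by driver; as an illustration, for the AWS EBS CSI driver the checks might look like:

```shell
# Check controller pod health (label and names depend on the installed driver)
kubectl -n kube-system get pods -l app=ebs-csi-controller
# Read the external-provisioner sidecar logs for provisioning errors
kubectl -n kube-system logs deployment/ebs-csi-controller -c csi-provisioner --tail=100
```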

If ResourceQuota is exhausted

Raise the quota or reclaim existing claims. Identify unused PVCs in the namespace and delete them. If a previous failed PVC creation consumed the persistentvolumeclaims quota count, removing the stuck Pending PVC frees the slot. For multi-tenant clusters, review whether quota limits match actual workload needs before the next deployment.
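Once you have decided to raise a limit or reclaim a slot, a merge patch and a targeted delete are enough; quota, claim, and namespace names are placeholders, and the new ceiling is illustrative:

```shell
# Raise the storage request ceiling on the namespace quota
kubectl patch resourcequota <quota-name> -n <namespace> --type merge \
  -p '{"spec":{"hard":{"requests.storage":"200Gi"}}}'
# Free a quota slot by removing a stuck Pending claim
kubectl delete pvc <pvc-name> -n <namespace>
```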

If topology is blocking binding

When using volumeBindingMode: WaitForFirstConsumer, ensure the pod is actually being scheduled by the scheduler. If the pod sets nodeName directly, switch to nodeSelector or affinity rules so the scheduler can provide a topology hint. For Immediate binding with zone constraints, verify that node labels and allowed topologies align with the zones where the cloud provisioner can create volumes.
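The difference in the pod spec is small but decisive; node and zone values here are illustrative fragments, not complete manifests:

```yaml
# Avoid: nodeName bypasses the scheduler, so WaitForFirstConsumer never
# receives the topology hint it needs to place the volume
spec:
  nodeName: worker-3

# Prefer: the scheduler places the pod and passes its zone to the provisioner
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
```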

If cloud limits are hit

Request a volume attachment limit increase from your cloud provider, or spread workloads across more nodes. If a volume is stuck attached to a terminated node, force-detach it from the cloud console or wait for the attach-detach controller to time out. Do not manually delete VolumeAttachments unless you understand the node state and are prepared for data consistency risks.
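One way to see which nodes are close to their attachment limit is to tally VolumeAttachments per node; this read-only sketch assumes a reachable cluster:

```shell
# Count VolumeAttachments by node; compare against the instance type's limit
kubectl get volumeattachments \
  -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' \
  | sort | uniq -c | sort -rn
```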

Prevention

  • Mark one StorageClass as the cluster default (via the storageclass.kubernetes.io/is-default-class annotation) so PVCs without an explicit class still provision dynamically.
  • Monitor ResourceQuota utilization per namespace and alert at 80% to avoid surprise creation failures.
  • Use volumeBindingMode: WaitForFirstConsumer for topology-constrained cloud storage to avoid provisioning volumes in zones where no pod can run.
  • Set reasonable storage requests and review namespace quotas before deploying new StatefulSets.
  • Stream Kubernetes events to persistent storage so you can review ProvisioningFailed patterns after events age out.
  • Validate StorageClass provisioner names during cluster upgrades, especially when migrating from in-tree plugins to CSI drivers.
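Setting the default class is a one-line patch; the class name here is illustrative:

```shell
# Mark an existing StorageClass as the cluster default
kubectl patch storageclass standard -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```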

How Netdata helps

  • PVC status tracked alongside node storage saturation and CSI health metrics.
  • ResourceQuota utilization charts show namespace-level consumption trends before quotas hard-block new claims.
  • Disk latency and fsync metrics on control plane and worker nodes help distinguish provisioner issues from underlying storage I/O stalls.
  • Pending pod counts paired with PVC phase alerts confirm whether a scheduling backlog is storage-related or resource-related.