Description
pvcbench is a high-performance benchmarking tool for the Kubernetes PVC protection controller (kubernetes.io/pvc-protection). It uses realistic StatefulSet-based workloads (volumeClaimTemplates + persistentVolumeClaimRetentionPolicy) to measure latency between Pod deletion and actual PVC removal.
Source code is available on GitHub.
Prerequisites
- Go 1.24+
- Minikube
- kubectl configured to your minikube cluster
- Helm (for monitoring stack)
Minikube setup
Start minikube with sufficient resources. A multi-node setup is recommended to better simulate a real-world cluster:
minikube start --kubernetes-version=v1.30.11 --cpus=4 --memory=8192 --nodes=3 \
--extra-config=kubelet.max-pods=250 \
--addons=default-storageclass \
--addons=storage-provisioner \
--extra-config=controller-manager.bind-address=0.0.0.0 \
--extra-config=scheduler.bind-address=0.0.0.0 \
--extra-config=etcd.listen-metrics-urls=http://0.0.0.0:2381
# Ensure you are in the minikube context
kubectl config use-context minikube
Confirm the default StorageClass exists (minikube provides one by default):
kubectl get storageclass
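On a stock minikube the output typically looks like the following (provisioner and column layout can vary by minikube version):
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  2m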
Monitoring setup
We use kube-prometheus-stack to scrape metrics from the cluster control plane (kube-controller-manager, kube-apiserver) and the pvcbench tool itself.
Install Prometheus stack
- Add the Helm repo:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
- Install with minikube-specific values (enables control-plane scraping) into the monitoring namespace:
kubectl create namespace monitoring
helm install prometheus prometheus-community/kube-prometheus-stack \
  -n monitoring \
  -f deploy/monitoring/kube-prometheus-stack-values-minikube.yaml
The values file sets a 5s scrape interval for faster visibility during short benchmarks.
Configure scraping for pvcbench
The values file configures Prometheus to scrape host.docker.internal:8080, which works on macOS with the Docker driver. If you are on Linux or using another driver, update the additionalScrapeConfigs target to one of:
- host.minikube.internal:8080 (recent minikube versions)
- The value of minikube ip with an exposed host port
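If you are not sure which address the cluster can use to reach the host, one way to check (assuming a recent minikube, which writes the host entry into each node's /etc/hosts) is:
# Show how the node resolves the host machine; pair the resulting IP with pvcbench's port (8080)
minikube ssh "grep minikube.internal /etc/hosts"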
Import dashboards
- Import dashboards from the dashboards/ directory:
kubectl create configmap pvcbench-dashboards \
  -n monitoring \
  --from-file=dashboards/pvc-protection.json \
  --from-file=dashboards/apiserver-pressure.json \
  --from-file=dashboards/run-timeline.json
- Label the created ConfigMap so the Grafana dashboard sidecar picks it up:
kubectl label configmap pvcbench-dashboards -n monitoring grafana_dashboard=1
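If the dashboards do not show up in Grafana, a quick thing to check is the dashboard sidecar's logs (container name taken from the default kube-prometheus-stack chart; adjust if you changed the release name):
kubectl logs -n monitoring deployment/prometheus-grafana -c grafana-sc-dashboard --tail=20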
Configure port forwarding for Grafana
- Port-forward Grafana:
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
- Open Grafana at http://localhost:3000 using the default admin / admin credentials.
- Dashboards to open:
  - PVC Protection Controller Performance: monitor controller workqueue and PVC latency.
  - API Server Pressure: monitor API server LIST QPS and latency.
  - Run Timeline: correlate tool phases with cluster behavior.
- (Optional) Port-forward Prometheus for raw PromQL testing:
kubectl port-forward -n monitoring svc/prometheus-kube-prometheus-prometheus 9090:9090
- (Optional) Check that Prometheus can reach all of its targets at http://localhost:9090/targets
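With the port-forward active you can also hit the standard Prometheus HTTP API from the terminal, for example to list target health (requires jq):
curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[] | {job: .labels.job, health: .health}'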
Tool usage
Benchmark commands
benchmark
Runs a single scenario in an isolated namespace (pvcbench-<timestamp>).
# Staggered: scale down in batches with a pause between steps
go run ./cmd/pvcbench benchmark --scenario staggered --replicas 100 --delete-batch-size 10 --delete-interval 5s
# Burst: scale from 100 to 0 immediately (worst-case controller load)
go run ./cmd/pvcbench benchmark --scenario burst --replicas 100 --pvc-size 100Mi
Each scenario first creates a StatefulSet (Pods + PVCs), waits for readiness, then applies the chosen scale-down pattern and polls the PVCs with GET requests (100ms interval by default) to measure deletion latency.
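To watch this happening from a second terminal during a run (substitute the actual timestamped namespace):
kubectl get pvc -n pvcbench-<timestamp> -w
kubectl get pods -n pvcbench-<timestamp> -w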
cleanup
Deletes all benchmark namespaces created by the tool (prefixed pvcbench-).
go run ./cmd/pvcbench cleanup
# Force deletion by removing finalizers (use with caution):
go run ./cmd/pvcbench cleanup --force
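For reference, forcing deletion of a stuck PVC amounts to clearing its finalizers; the manual equivalent (the tool's exact --force behavior may differ) looks roughly like:
kubectl patch pvc <pvc-name> -n pvcbench-<timestamp> --type=merge -p '{"metadata":{"finalizers":null}}'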
Makefile shortcuts
Use the Makefile for quick runs:
make benchmark-burst
make benchmark-staggered
make benchmark-suite
make cleanup-benchmark-namespaces
make test
Override defaults with variables:
make benchmark-burst REPLICAS=50 PVC_SIZE=10Mi
make benchmark-staggered REPLICAS=80 DELETE_BATCH_SIZE=20 DELETE_INTERVAL=3s PVC_POLL_INTERVAL=250ms
Results
Kubernetes 1.30
=== Benchmark Summary ===
Total Duration: 28.510834167s
Scenario: burst
Replicas: 200
PVC Size: 100Mi
Kubernetes Version: v1.30.11
PVC Poll Interval: 100ms
PVC Delete Latency:
Count: 200
Avg: 4.053708648s
p50: 4.2203s
p90: 7.989635s
p99: 9.420722s
==========================
Kubernetes 1.32
=== Benchmark Summary ===
Total Duration: 19.707921375s
Scenario: burst
Replicas: 200
PVC Size: 100Mi
Kubernetes Version: v1.32.0
PVC Poll Interval: 100ms
PVC Delete Latency:
Count: 200
Avg: 64.513659ms
p50: 42ns
p90: 84ns
p99: 1.587592s
==========================
Reviewing results
Console summary
After each run, the tool prints a summary of recorded PVC deletion latencies:
=== Benchmark Summary ===
Total Duration: 45s
Scenario: burst
Replicas: 100
PVC Size: 100Mi
Kubernetes Version: v1.30.11
PVC Poll Interval: 500ms
PVC Delete Latency:
Count: 100
Avg: 2.3s
p50: 1.8s
p90: 4.1s
p99: 5.8s
==========================
Metrics review (Grafana)
- PVC Delete Latency: look for spikes in p99 latency during scale-down.
- Controller Workqueue Depth: high depth indicates the PVC protection controller is falling behind.
- LIST Pods QPS: the controller performs LIST pods frequently. Watch for spikes during scale-down.
- API Server Latency: high LIST latency suggests the API server is struggling under the load.
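If you prefer raw PromQL over the dashboards, a few starting points (the workqueue name pvcprotection and the exact label values are assumptions and may differ between Kubernetes versions):
# Depth of the PVC protection controller's workqueue (queue name assumed)
workqueue_depth{name="pvcprotection"}
# Rate of LIST pods requests hitting the API server
sum(rate(apiserver_request_total{verb="LIST", resource="pods"}[1m]))
# p99 LIST latency as seen by the API server
histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{verb="LIST"}[5m])) by (le))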
You can also validate that the pvcbench metrics endpoint is live:
curl http://localhost:8080/metrics
Safety guards
- The tool will only run if your current kubectl context is minikube. This prevents accidental execution on production clusters.
- High QPS/burst settings for the Kubernetes client are configurable via flags (--client-qps, --client-burst).
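To double-check which context is active before a run:
kubectl config current-context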
Direct kubectl cleanup
You can also clean up manually by deleting the tool’s namespaces:
kubectl delete namespace pvcbench-<timestamp>
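To remove every benchmark namespace in one go (roughly what the cleanup command automates):
kubectl get namespaces -o name | grep '^namespace/pvcbench-' | xargs kubectl delete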
Possible issues and solutions
If you want to confirm the PVC protection finalizer is present during deletions:
kubectl get pvc -n pvcbench-<timestamp> -o jsonpath='{.items[0].metadata.name}{"\n"}'
kubectl get pvc <pvc-name> -n pvcbench-<timestamp> -o jsonpath='{.metadata.finalizers}{"\n"}'
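While the PVC is still in use (or its Pod's deletion has not yet been processed), the finalizer list should include kubernetes.io/pvc-protection; once the controller removes that finalizer, the PVC disappears shortly afterwards.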