How to Clone a PVC in Kubernetes
Summary: This wiki page shows how to clone a Persistent Volume Claim (PVC) in Kubernetes.
Date: 14 September 2025
While running an Argo CD sync, I got an error on one of our PVCs. For one of our applications we had upgraded the storage class, but we had not yet found the time to convert the actual PVC:
one or more objects failed to apply, reason: PersistentVolumeClaim "seq" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests and volumeAttributesClassName for bound claims
  core.PersistentVolumeClaimSpec{
        ... // 2 identical fields
        Resources:        {Requests: {s"storage": {i: {...}, Format: "BinarySI"}}},
        VolumeName:       "pvc-2bc75dce-01e6-4f8b-ae06-3fc6c6657dac",
-       StorageClassName: &"default",
+       StorageClassName: &"managed-premium",
        VolumeMode:       &"Filesystem",
        DataSource:       nil,
        ... // 2 identical fields
  }
As you can see, the storage class 'default' is changed to 'managed-premium', but unfortunately this cannot be done in place in Kubernetes, because that field of a bound PVC is immutable. Follow the procedure below to use korb to quickly clone the PVC to a new one with the correct storage class.
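Before changing anything, you can confirm the storage class of the bound PVC directly. This is not part of the original procedure, just a quick check using the PVC name and namespace from the example above (adjust them for your own cluster):

# Show the storage class the bound PVC is currently using
kubectl get pvc seq -n appops -o jsonpath='{.spec.storageClassName}{"\n"}'

# Or show the full spec to compare against the desired manifest
kubectl get pvc seq -n appops -o yaml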
Current Situation
This is the current PVC manifest:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: seq
  namespace: appops
  uid: 33dd11a4-e97e-4ce2-85e5-94efbad9e087
  resourceVersion: '737538059'
  creationTimestamp: '2024-08-07T14:57:02Z'
  labels:
    app: seq
    chart: seq-2024.3.1
    heritage: Helm
    k8slens-edit-resource-version: v1
    release: seq
  annotations:
    argocd.argoproj.io/tracking-id: appops:/PersistentVolumeClaim:appops/seq
    pv.kubernetes.io/bind-completed: 'yes'
    pv.kubernetes.io/bound-by-controller: 'yes'
    volume.beta.kubernetes.io/storage-provisioner: disk.csi.azure.com
    volume.kubernetes.io/selected-node: aks-system-12344567-vmss000000
    volume.kubernetes.io/storage-provisioner: disk.csi.azure.com
  finalizers:
    - kubernetes.io/pvc-protection
  selfLink: /api/v1/namespaces/appops/persistentvolumeclaims/seq
status:
  phase: Bound
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Ti
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Ti
  volumeName: pvc-2bc75dce-01e6-4f8b-ae06-3fc6c6657dac
  storageClassName: default
  volumeMode: Filesystem
Note that the disk is 1 TiB in size. Monitoring shows that it is 82% full.
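If you want to verify the fill percentage from inside the cluster rather than from monitoring, you can run df in the pod that mounts the volume. This is a sketch only: the deployment name comes from the steps below, but the mount path /data is an assumption and depends on how the seq chart mounts the volume.

# Check how full the mounted volume is (the /data mount path is an assumption; check the pod spec for the real one)
kubectl -n appops exec deploy/seq -- df -h /data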
Approach
There are several ways to clone a PVC, but I decided to create a quick clone using korb. Korb is a command-line tool to clone Kubernetes Persistent Volume Claims (PVCs) and migrate data between different storage classes. It supports various strategies for copying data, including snapshot-based and copy-twice methods.
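Before starting, it is worth confirming that the target storage class actually exists in the cluster. A quick pre-flight check, using the class name from the error above:

# List the storage classes and inspect the target class
kubectl get storageclass
kubectl get storageclass managed-premium -o yaml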
Follow the steps below:
- Check for the latest release on the korb releases page: https://github.com/BeryJu/korb/releases
- Use the following commands to install the korb binary on your Linux system; make sure to replace the version number if a newer one is available:
# Download the latest release, extract it, and install it to /usr/local/bin
curl -LO https://github.com/BeryJu/korb/releases/download/v2.3.4/korb_2.3.4_linux_amd64.tar.gz
tar -xvzf korb_2.3.4_linux_amd64.tar.gz
sudo mv korb /usr/local/bin/korb

# Scale the application with the PVC to 0 replicas
kubectl scale deployment seq --replicas=0 -n appops

# Start the clone using korb. The container image parameter is only necessary when working on a private cluster
korb --new-pvc-storage-class=managed-premium \
     --strategy=copy-twice-name \
     --new-pvc-namespace=appops \
     --source-namespace=appops \
     --container-image=acreuwprd.azurecr.io/docker/beryju/korb-mover:v2 \
     seq

# Once the clone is ready, scale the application back to 1 replica
kubectl scale deployment seq --replicas=1 -n appops
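While korb is running, you can follow the progress from a second terminal. korb creates a temporary PVC and runs a mover pod (using the korb-mover image referenced above); the exact object names are generated by korb, so a simple watch is the easiest way to see where the copy is:

# Watch the temporary PVC and the mover pod that korb creates
kubectl get pvc,pods -n appops -w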
Note the following:
- The korb command above was run on a private (Azure Kubernetes) cluster, so I had to push the mover image to our container registry before the command worked. You can do that with the following commands (replace the ACR name with your own); a sketch for verifying the pushed image follows after this list:
# Log in to Azure and authenticate the local Docker client against the registry
az login
az acr login --name acreuwprd

# Pull the public mover image, retag it for the private registry, and push it
docker pull ghcr.io/beryju/korb-mover:v2
docker tag ghcr.io/beryju/korb-mover:v2 acreuwprd.azurecr.io/docker/beryju/korb-mover:v2
docker push acreuwprd.azurecr.io/docker/beryju/korb-mover:v2
- The 'copy-twice-name' strategy means that the PVC is first cloned to a temporary PVC with a temporary name, and then immediately cloned again to a final PVC with the original name. This works best in an environment with Argo CD, which tracks the PVC by name.
- The clone from the original 'default' storage class to the 'managed-premium' storage class took 3 hours and 20 minutes for a 1 TiB PVC that was 82% full (roughly 840 GiB of data, which works out to about 70 MiB/s). The second clone, to the final PVC with the original name, was somewhat faster, but still took a long time. Keep that in mind when planning a maintenance window.
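As mentioned in the first note, you can verify that the mover image actually landed in the private registry before running korb. A small sketch using the Azure CLI, with the registry name and repository path taken from the commands above (assumes you are already logged in with az login):

# Confirm the repository and tag exist in the ACR
az acr repository show-tags --name acreuwprd --repository docker/beryju/korb-mover --output table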
Result
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: seq
  namespace: appops
  uid: 1946a79c-37aa-4f01-8ce0-ed090d2b9b67
  resourceVersion: '743857155'
  creationTimestamp: '2025-09-10T22:24:19Z'
  labels:
    app: seq
    chart: seq-2024.3.1
    heritage: Helm
    k8slens-edit-resource-version: v1
    release: seq
  annotations:
    argocd.argoproj.io/tracking-id: appops:/PersistentVolumeClaim:appops/seq
    pv.kubernetes.io/bind-completed: 'yes'
    pv.kubernetes.io/bound-by-controller: 'yes'
    volume.beta.kubernetes.io/storage-provisioner: disk.csi.azure.com
    volume.kubernetes.io/selected-node: aks-system-12344567-vmss000000
    volume.kubernetes.io/storage-provisioner: disk.csi.azure.com
  finalizers:
    - kubernetes.io/pvc-protection
  selfLink: /api/v1/namespaces/appops/persistentvolumeclaims/seq
status:
  phase: Bound
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Ti
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Ti
  volumeName: pvc-1946a79c-37aa-4f01-8ce0-ed090d2b9b67
  storageClassName: managed-premium
  volumeMode: Filesystem
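To confirm that everything is healthy after the clone, check that the new PVC is bound with the expected storage class and that the application pod came back up after scaling to 1 replica. A short check using the names and labels from the manifests above:

# Verify the new PVC is Bound and uses the new storage class
kubectl get pvc seq -n appops
kubectl get pvc seq -n appops -o jsonpath='{.spec.storageClassName}{"\n"}'

# Verify the application pod is running again
kubectl get pods -n appops -l app=seq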