Local Path Provisioner uses the worker node’s local disk to create PersistentVolumes. It’s the simplest way to add storage to a cluster — no external dependencies, no configuration, and fast I/O since data lives on the same machine as your application.

When to use

  • Development and testing — Get persistent storage running in seconds
  • Single-node clusters — Local testing with Multipass or a single production server
  • Latency-sensitive production workloads — Databases on NVMe SSDs where direct disk access matters and the application handles its own replication (for example, PostgreSQL with streaming replication or MongoDB replica sets)
  • Simple production setups — Single-node servers where replication isn’t needed
Local Path Provisioner stores data on a single node with no replication. If the node fails or is replaced, the data is lost. For production workloads that need storage-level durability, use Piraeus (LINSTOR) instead — unless your application handles replication itself.

How it works

When an application requests storage, Local Path Provisioner creates a directory on the node’s local filesystem and mounts it into the pod. The volume is tied to that specific node — if Kubernetes needs to reschedule the pod, it schedules it back to the same node so it can access its data. Key characteristics:
  • Each volume is a directory on a single node’s filesystem
  • Pods using local storage are pinned to the node where the volume was created
  • No network overhead — reads and writes go directly to local disk, providing the lowest possible latency
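To make the node pinning concrete, here is a sketch of what a PersistentVolume created by the provisioner typically looks like. The volume name, directory path, and node name are illustrative; `local-path` is the StorageClass name the provisioner registers by default, and the `nodeAffinity` stanza is what keeps pods on the node that holds the data:

```yaml
# Sketch of a PersistentVolume as provisioned by Local Path Provisioner.
# Names and paths are illustrative, not copied from a real cluster.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-0a1b2c3d-example
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  hostPath:
    path: /opt/local-path-provisioner/pvc-0a1b2c3d-example
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1   # the node where the directory was created
```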

Deploy

Navigate to your cluster’s Storage tab, click Deploy Storage, and select Local Path Provisioner. The provider deploys in under a minute and creates a default StorageClass automatically. After deployment, any application that requests a PersistentVolumeClaim will get a local volume provisioned on the node where the pod is scheduled.
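As a minimal sketch, an application requests storage with an ordinary PersistentVolumeClaim. The claim name is a placeholder, and `local-path` is the StorageClass name the provisioner usually registers; it can be omitted when that class is the cluster default:

```yaml
# Example PersistentVolumeClaim for a local volume (names are placeholders).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example
spec:
  accessModes:
    - ReadWriteOnce        # local volumes are inherently single-node
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi         # recorded, but not enforced (see Limitations)
```

Note that the provisioner typically waits to create the volume until a pod using the claim is scheduled, so a freshly created claim may sit in `Pending` until its first consumer appears.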

Limitations

  • Single-node only — Data lives on one node and is not replicated
  • No failover — If the node goes down, pods using local storage can’t be rescheduled elsewhere until the node recovers
  • No snapshots — Volume snapshots are not supported
  • No volume size enforcement — Requested storage sizes in PersistentVolumeClaims are not enforced. Since each volume is just a directory on the node’s filesystem, it can grow to use all available disk space regardless of the size you request (GitHub #107).
  • No volume expansion — Resizing a PVC after creation is not supported. Attempting to expand a volume causes the controller to log repeated errors because the provisioner has no resize capability (GitHub #190).
  • Node pinning — Pods are pinned to the node where their volume was created

Local storage in production

Local Path Provisioner can be a valid production choice when your priority is raw disk performance and your application handles data replication at the application level. Databases like PostgreSQL, MySQL, and MongoDB have built-in replication that keeps copies of data across multiple instances — they don’t need the storage layer to replicate for them. In these cases, local storage on NVMe SSDs gives you the lowest latency without the overhead of network-based replication. The trade-off is operational: you’re responsible for ensuring your application’s replication is configured correctly. If it isn’t, a node failure means data loss.
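A sketch of the pattern above: a StatefulSet gives each database replica its own node-local volume through `volumeClaimTemplates`, while replication between the replicas happens inside the database itself. The names, image tag, replica count, and sizes are illustrative, and configuring streaming replication in PostgreSQL is a separate step not shown here:

```yaml
# Sketch: one local volume per database replica via volumeClaimTemplates.
# Replication between replicas must be configured at the application level.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3                    # each replica lands on a node with its own volume
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: local-path
        resources:
          requests:
            storage: 20Gi
```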

Moving to replicated storage

When you want storage-level replication instead of managing it at the application layer:
  1. Deploy Piraeus (LINSTOR) for replicated storage across nodes
  2. Migrate your data by backing up and restoring from the application level (database dumps, file exports)
  3. Remove Local Path Provisioner once all workloads have moved
You can run both storage providers simultaneously during migration. Set Piraeus as the default StorageClass, and new volumes will automatically use replicated storage while existing local volumes continue to work.
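The default is switched with the standard `storageclass.kubernetes.io/is-default-class` annotation. A rough sketch of the two classes side by side during migration; the Piraeus class name is a placeholder (check the real name with `kubectl get storageclass`), and class parameters are omitted for brevity:

```yaml
# Sketch: both StorageClasses coexisting during migration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: piraeus-replicated       # placeholder name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # new volumes go here
provisioner: linstor.csi.linbit.com
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"  # existing volumes keep working
provisioner: rancher.io/local-path
```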