When to use
- Production workloads — Databases, message queues, and any application where data loss is unacceptable
- Multi-node clusters — Two or more worker nodes for meaningful replication
- High availability — Applications that need to survive node failures without downtime
How it works
Piraeus runs a storage controller and satellite agents on your worker nodes. When an application requests a volume:
- LINSTOR creates a volume and replicates it across multiple nodes
- DRBD keeps replicas synchronized in real time at the kernel level
- If the primary node fails, Kubernetes reschedules the pod to a node with an up-to-date replica
- The application resumes with all its data intact — no manual recovery needed
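In practice, an application triggers this flow simply by requesting a PersistentVolumeClaim. A minimal sketch, assuming a Piraeus-backed storage class named `piraeus-storage` (the actual class name in your cluster may differ):

```yaml
# PVC that requests a replicated Piraeus volume.
# The storage class name "piraeus-storage" is an assumption —
# use the default class created by your deployment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: piraeus-storage
  resources:
    requests:
      storage: 10Gi
```

Any pod mounting this claim gets a DRBD-replicated volume; if its node fails, Kubernetes can reschedule the pod to another node that holds a replica.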
Key features
- Configurable replication — synchronous by default, with asynchronous options for performance-sensitive workloads
- Automatic failover — pods reschedule to healthy nodes with existing replicas
- Volume snapshots and cloning for backup and testing
- Thin provisioning — storage is allocated on demand, not reserved upfront
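Snapshots use the standard Kubernetes snapshot API. A hedged sketch, assuming a VolumeSnapshotClass named `piraeus-snapshots` exists for the LINSTOR CSI driver (both names here are illustrative):

```yaml
# Snapshot of an existing PVC for backup or cloning.
# Class and PVC names are illustrative assumptions.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-data-snap
spec:
  volumeSnapshotClassName: piraeus-snapshots
  source:
    persistentVolumeClaimName: postgres-data
```

A new PVC can then reference this snapshot as its `dataSource` to clone the volume for testing.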
Replication modes
DRBD supports three replication protocols that control when a write is considered complete:

| Protocol | Mode | Write confirmed when | Best for |
|---|---|---|---|
| Protocol C | Synchronous | Local and remote disk writes complete | Most workloads — strongest data safety (default) |
| Protocol B | Semi-synchronous | Local disk write completes and data reaches the peer node’s memory | Faster writes with near-synchronous safety |
| Protocol A | Asynchronous | Local disk write completes and data is placed in the send buffer | Maximum write performance — useful for long-distance replication or disaster recovery |
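The protocol can typically be chosen per storage class by passing a DRBD option through the LINSTOR CSI driver. A sketch, assuming the `DrbdOptions/Net/protocol` property form supported by recent Piraeus releases (verify the exact parameter keys against your version's documentation):

```yaml
# Storage class that relaxes replication to asynchronous (Protocol A),
# e.g. for long-distance or disaster-recovery replication.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: piraeus-async
provisioner: linstor.csi.linbit.com
parameters:
  linstor.csi.linbit.com/placementCount: "2"
  # Parameter key is an assumption; Protocol C (synchronous) is the default.
  property.linstor.csi.linbit.com/DrbdOptions/Net/protocol: "A"
```

Workloads that need the strongest guarantees can simply use the default class, which keeps Protocol C.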
Deploy
Navigate to your cluster’s Storage tab, click Deploy Storage, and select Piraeus (LINSTOR). Deployment takes a few minutes as Piraeus installs its controller and satellite components on each worker node. After deployment, a default storage class is created. Any application requesting a persistent volume will get a replicated volume distributed across your nodes.
Worker node storage
Piraeus uses the local disks on your worker nodes for storage. When choosing servers for a cluster with Piraeus:
- Disk space — Each node contributes its local disk to the storage pool. More disk space per node means more total storage capacity.
- Disk type — NVMe SSDs provide the best performance for database workloads. Standard SSDs work well for most applications.
- Number of nodes — At least two nodes for meaningful replication, three for full redundancy (data survives any single node failure).
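The replica count described above is set per storage class. A sketch, assuming the `placementCount` parameter of the LINSTOR CSI driver (verify the parameter name against your Piraeus version):

```yaml
# Three-way replication: every volume keeps a copy on three nodes,
# so data survives any single node failure.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: piraeus-3-replicas
provisioner: linstor.csi.linbit.com
parameters:
  linstor.csi.linbit.com/placementCount: "3"
```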
Cloud provider CSI drivers
If your worker nodes run on a cloud provider with its own block storage (Hetzner Cloud Volumes, AWS EBS, etc.), you have a choice:
- Piraeus with local disks — Simpler setup, no cloud-specific dependencies, replication handled by DRBD. Your data is replicated across nodes regardless of the underlying cloud provider.
- Cloud CSI driver — Uses the cloud provider’s managed block storage. The cloud provider handles durability and snapshots, but you pay per volume and are tied to that provider.
CNAP’s built-in Piraeus deployment works on any cluster — managed or imported, any cloud provider or bare metal. You don’t need to choose a storage strategy based on your infrastructure provider.
Related topics
- Local Path Provisioner → — Direct local disk storage
- Storage overview → — Understanding storage in CNAP
- Add workers → — Connect compute capacity to your cluster
- Piraeus Operator on GitHub → — Project source and documentation
- DRBD User’s Guide → — DRBD replication protocol details