Piraeus is a production-grade storage solution built on LINSTOR and DRBD. It replicates data across multiple nodes, so if a node fails, your applications continue running on another node with a complete copy of their data.

When to use

  • Production workloads — Databases, message queues, and any application where data loss is unacceptable
  • Multi-node clusters — Two or more worker nodes for meaningful replication
  • High availability — Applications that need to survive node failures without downtime

How it works

Piraeus runs a storage controller and satellite agents on your worker nodes. When an application requests a volume:
  1. LINSTOR creates a volume and replicates it across multiple nodes
  2. DRBD keeps replicas synchronized in real time at the kernel level
  3. If the primary node fails, Kubernetes reschedules the pod to a node with an up-to-date replica
  4. The application resumes with all its data intact — no manual recovery needed
Key characteristics:
  • Configurable replication — synchronous by default, with asynchronous options for performance-sensitive workloads
  • Automatic failover — pods reschedule to healthy nodes with existing replicas
  • Volume snapshots and cloning for backup and testing
  • Thin provisioning — storage is allocated on demand, not reserved upfront
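
Snapshots use the standard Kubernetes VolumeSnapshot API. A minimal sketch — the snapshot class name and PVC name here are illustrative assumptions, and the snapshot CRDs and controller must be installed in your cluster:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-data-snap                           # illustrative name
spec:
  volumeSnapshotClassName: piraeus-snapshots   # assumed class name; check your deployment
  source:
    persistentVolumeClaimName: db-data         # an existing PVC backed by Piraeus
```

Restoring works the same way as with any CSI driver: create a new PVC with `spec.dataSource` pointing at the VolumeSnapshot.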

Replication modes

DRBD supports three replication protocols that control when a write is considered complete:
| Protocol | Mode | Write confirmed when | Best for |
| --- | --- | --- | --- |
| Protocol C | Synchronous | Local and remote disk writes complete | Most workloads — strongest data safety (default) |
| Protocol B | Semi-synchronous | Local disk write completes and data reaches the peer node’s memory | Faster writes with near-synchronous safety |
| Protocol A | Asynchronous | Local disk write completes and data is placed in the send buffer | Maximum write performance — useful for long-distance replication or disaster recovery |
Protocol C (synchronous) is the default and the right choice for most production workloads. Every write is confirmed on both nodes before your application proceeds, so no data is lost if a node fails. Protocol A (asynchronous) gives you significantly faster write performance because the application doesn’t wait for the remote node to confirm. The trade-off is that a small window of recent writes could be lost if the primary node fails before the data reaches the replica. This mode is commonly used for disaster recovery across data centers where the network latency of synchronous replication would be too high.
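
The protocol can be chosen per StorageClass. A sketch of an asynchronous class — the parameter keys follow the LINSTOR CSI driver's property-passthrough convention, but the exact keys and the class name are assumptions to verify against your Piraeus version:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: piraeus-async                          # illustrative name
provisioner: linstor.csi.linbit.com
parameters:
  # Number of replicas to place across nodes
  linstor.csi.linbit.com/placementCount: "2"
  # Pass DRBD's net protocol through as a LINSTOR property (A = asynchronous)
  property.linstor.csi.linbit.com/DrbdOptions/Net/protocol: "A"
volumeBindingMode: WaitForFirstConsumer
```

Applications that can tolerate a small loss window (step counters, caches, DR copies) point their PVCs at this class; everything else stays on the Protocol C default.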

Deploy

Navigate to your cluster’s Storage tab, click Deploy Storage, and select Piraeus (LINSTOR). Deployment takes a few minutes as Piraeus installs its controller and satellite components on each worker node. After deployment, a default StorageClass is created. Any application requesting a volume will get a replicated volume distributed across your nodes.
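
Requesting replicated storage is then an ordinary PersistentVolumeClaim. A sketch — the claim name is illustrative, and the StorageClass name is an assumption (list the actual default with `kubectl get storageclass`):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data                     # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: piraeus-storage # assumed default class name; verify in your cluster
  resources:
    requests:
      storage: 10Gi
```

If the class is marked as the cluster default, `storageClassName` can be omitted entirely.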

Worker node storage

Piraeus uses the local disks on your worker nodes for storage. When choosing servers for a cluster with Piraeus:
  • Disk space — Each node contributes its local disk to the storage pool. More disk space per node means more total storage capacity.
  • Disk type — NVMe SSDs provide the best performance for database workloads. Standard SSDs work well for most applications.
  • Number of nodes — At least two nodes for meaningful replication, three for full redundancy (data survives any single node failure).
Using your worker nodes’ local disks is the most straightforward and cost-effective approach. You avoid provisioning separate cloud volumes, and Piraeus handles replication automatically. This works well for most workloads — you get production-grade durability without extra infrastructure costs.
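
The replica count is configurable per StorageClass, so a three-node cluster can request full redundancy explicitly. A sketch under the same assumptions as above (parameter key per the LINSTOR CSI driver; verify against your Piraeus version):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: piraeus-3-replicas                     # illustrative name
provisioner: linstor.csi.linbit.com
parameters:
  # Keep a full copy on three nodes: data survives any single node failure
  linstor.csi.linbit.com/placementCount: "3"
volumeBindingMode: WaitForFirstConsumer
```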

Cloud provider CSI drivers

If your worker nodes run on a cloud provider with its own block storage (Hetzner Cloud Volumes, AWS EBS, etc.), you have a choice:
  • Piraeus with local disks — Simpler setup, no cloud-specific dependencies, replication handled by DRBD. Your data is replicated across nodes regardless of the underlying cloud provider.
  • Cloud CSI driver — Uses the cloud provider’s managed block storage. The cloud provider handles durability and snapshots, but you pay per volume and are tied to that provider.
Both approaches are valid. Piraeus with local disks is provider-agnostic and keeps your infrastructure portable. A cloud CSI driver may be preferable if you want cloud-managed snapshots or your provider offers specialized storage tiers.
CNAP’s built-in Piraeus deployment works on any cluster — managed or imported, any cloud provider or bare metal. You don’t need to choose a storage strategy based on your infrastructure provider.