An agent asked to “build me a production dashboard for my SaaS” produced a complete operational overview — Postgres table sizes, Redis memory, recent application errors, and cluster health — all as live auto-refreshing widgets. One conversation, zero config. This is the kind of dashboard that normally means stitching together Grafana panels, writing SQL by hand, configuring Prometheus exporters, and wiring up log aggregation. Here, it’s a conversation.

The Prompt

“I’m running a SaaS app on CNAP — Postgres for data, Redis for caching. Create a dashboard that shows me database table sizes, Redis memory, recent app errors from logs, and overall cluster health. I want to open this every morning and know if anything needs attention.”
The agent creates four snippets and assembles them into a dashboard.

Widget 1 — Postgres Table Sizes (Table)

The agent uses exec to run psql inside the Postgres container and query pg_stat_user_tables:
async () => {
  const clusterId = "cls_abc123"; // resolved from conversation

  const res = await cnap.request({
    method: "POST",
    path: `/v1/clusters/${clusterId}/exec`,
    body: {
      namespace: "production",
      pod: "postgres-0",
      container: "postgresql",
      command: ["psql", "-U", "postgres", "-t", "-A", "-F", "|", "-c", `
        SELECT tablename,
               pg_size_pretty(pg_total_relation_size('public.' || quote_ident(tablename))) AS total,
               pg_size_pretty(pg_indexes_size('public.' || quote_ident(tablename))) AS indexes,
               n_live_tup AS rows,
               CASE WHEN n_dead_tup > n_live_tup * 0.1
                    THEN 'VACUUM NEEDED' ELSE 'ok' END AS health
        FROM pg_stat_user_tables
        ORDER BY pg_total_relation_size('public.' || quote_ident(tablename)) DESC
        LIMIT 10;
      `]
    }
  });

  return res.body.stdout.trim().split("\n").filter(Boolean).map(line => {
    const [table, total, indexes, rows, health] = line.split("|");
    return { table, total, indexes, rows: parseInt(rows, 10), health };
  });
}
Displays as a table with columns: table name, total size, index size, row count, and a health flag that warns when dead tuples exceed 10% — meaning the table needs vacuuming.
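
The "VACUUM NEEDED" rule lives in the SQL, but the same 10% dead-tuple threshold is easy to express client-side if you'd rather keep the query dumb and the logic in the snippet. A minimal sketch — the threshold value is the one from the query above; the function name is ours:

```javascript
// Mirrors the CASE expression in the SQL above: flag a table when
// dead tuples exceed 10% of live tuples (illustrative helper).
function vacuumHealth(liveTuples, deadTuples, threshold = 0.1) {
  return deadTuples > liveTuples * threshold ? "VACUUM NEEDED" : "ok";
}
```

You could then select the raw `n_live_tup` and `n_dead_tup` columns and apply this per row, which makes the threshold trivial to change without touching the SQL.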

Widget 2 — Redis Memory (Stat)

Runs redis-cli inside the Redis pod and extracts the key metric:
async () => {
  const clusterId = "cls_abc123";

  const res = await cnap.request({
    method: "POST",
    path: `/v1/clusters/${clusterId}/exec`,
    body: {
      namespace: "production",
      pod: "redis-0",
      container: "redis",
      command: ["redis-cli", "INFO", "memory"]
    }
  });

  const lines = res.body.stdout.split("\n");
  const used = lines.find(l => l.startsWith("used_memory_human:"))?.split(":")[1]?.trim();
  const peak = lines.find(l => l.startsWith("used_memory_peak_human:"))?.split(":")[1]?.trim();
  const frag = lines.find(l => l.startsWith("mem_fragmentation_ratio:"))?.split(":")[1]?.trim();

  return `${used} used / ${peak} peak (frag: ${frag})`;
}
Displays as a single stat: 48.23M used / 52.10M peak (frag: 1.12). At a glance: memory is healthy, fragmentation is low.
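
The three `lines.find(...)` lookups generalize to a small parser over the whole INFO section, which is handy once you want more than three fields. A hedged sketch — `parseRedisInfo` and `memoryStat` are illustrative names, not CNAP or Redis APIs:

```javascript
// Parses the key:value lines of `redis-cli INFO memory` output into an
// object, skipping comment lines that start with "#".
function parseRedisInfo(stdout) {
  const info = {};
  for (const line of stdout.split("\n")) {
    const i = line.indexOf(":");
    if (i > 0 && !line.startsWith("#")) {
      info[line.slice(0, i)] = line.slice(i + 1).trim();
    }
  }
  return info;
}

// Formats the same one-line stat the widget above displays.
function memoryStat(info) {
  return `${info.used_memory_human} used / ${info.used_memory_peak_human} peak (frag: ${info.mem_fragmentation_ratio})`;
}
```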

Widget 3 — Recent App Errors (Logs)

Fetches container logs from the application pod and filters for errors:
async () => {
  const clusterId = "cls_abc123";

  const kube = (path) => cnap.request({
    method: "GET",
    path: `/v1/clusters/${clusterId}/kube/${path}`,
  }).then(r => r.body);

  // Find the app pods
  const pods = await kube("api/v1/namespaces/production/pods?labelSelector=app=web");
  const lines = [];

  for (const pod of pods.items.slice(0, 3)) {
    const logs = await kube(
      `api/v1/namespaces/production/pods/${pod.metadata.name}/log?tailLines=500`
    );
    const errors = logs.split("\n").filter(l =>
      /error|exception|fatal|panic|timeout|5\d{2}/i.test(l)
    );
    lines.push(`--- ${pod.metadata.name} ---`);
    lines.push(...(errors.length > 0 ? errors.slice(-10) : ["No recent errors"]));
  }

  return lines.join("\n");
}
Displays as a log viewer — syntax-highlighted output showing the last error lines from each app pod. Opens your morning with either a clean “No recent errors” or the exact lines that need attention.
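
The error filter can be factored out and tuned independently of the fetch loop. A sketch using the same regex as the snippet above — note that `5\d{2}` also matches any bare three-digit number starting with 5 (timestamps, ports), so in practice you may want word boundaries:

```javascript
// Same pattern as the widget above; tighten as needed for your log format.
const ERROR_RE = /error|exception|fatal|panic|timeout|5\d{2}/i;

// Returns the last `keep` matching lines, or a friendly placeholder.
function filterErrors(logText, keep = 10) {
  const errors = logText.split("\n").filter(l => ERROR_RE.test(l));
  return errors.length > 0 ? errors.slice(-keep) : ["No recent errors"];
}
```

To bound the window by time instead of line count, the Kubernetes log endpoint also accepts a `sinceSeconds` query parameter alongside `tailLines`.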

Widget 4 — Cluster Health (Table)

Queries the CNAP API for cluster status — no exec needed, this is platform data:
async () => {
  const [clusters, installs] = await Promise.all([
    cnap.request({ method: "GET", path: "/v1/clusters" }).then(r => r.body.data),
    cnap.request({ method: "GET", path: "/v1/installs" }).then(r => r.body.data),
  ]);

  return clusters.map(c => {
    const clusterInstalls = installs.filter(i => i.clusterId === c.id);
    const healthy = clusterInstalls.filter(i => i.status === "Healthy").length;
    return {
      cluster: c.name,
      status: c.kaas?.status ?? "imported",
      version: c.kaas?.version ?? "—",
      installs: `${healthy}/${clusterInstalls.length} healthy`,
    };
  });
}
Shows each cluster with its status, KaaS version, and how many of its installs are healthy — the infrastructure-level complement to the app-level widgets above.
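
If you want a single go/no-go signal on top of these rows, a tiny roll-up over the `installs` strings works. A sketch that assumes the exact `healthy/total` format the snippet above produces — the helper name is ours:

```javascript
// True only when every row reports all of its installs healthy.
// Assumes the "N/M healthy" string format built by the widget above.
function allInstallsHealthy(rows) {
  return rows.every(r => {
    const [healthy, total] = r.installs.split("/").map(s => parseInt(s, 10));
    return healthy === total;
  });
}
```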

The Result

Four widgets, one dashboard, auto-refreshes on load:
Widget                  Type    Data source           What it shows
Postgres Table Sizes    Table   exec + psql           Top 10 tables with sizes, row counts, vacuum health
Redis Memory            Stat    exec + redis-cli      Memory usage, peak, fragmentation ratio
Recent App Errors       Logs    Kubernetes log API    Last error lines from each app pod
Cluster Health          Table   CNAP API              Cluster status with install health counts
Open it every morning. If all four widgets look green, grab your coffee. If something’s off — a table needs vacuuming, Redis fragmentation is high, error logs are spiking, or an install failed — you see it immediately.

Why This Matters

This dashboard combines four different data sources — a SQL database, an in-memory cache, container logs, and the CNAP platform API — into a single view. Normally, this means:
  • Grafana with Prometheus exporters for Redis and Postgres metrics
  • A log aggregation pipeline (Loki, ELK) for application errors
  • A separate infrastructure dashboard for cluster health
  • Hours of configuration, and ongoing maintenance for all of it
Here, the agent wrote four JavaScript functions in a single conversation. The snippets are disposable — if you want to add a column, change the SQL query, or filter logs differently, just tell the agent. No YAML, no exporters, no pipeline configuration. And because snippets can call any CNAP API endpoint — including exec into any container — the same pattern works for any application: MongoDB stats, RabbitMQ queue depths, Nginx traffic analysis, Elasticsearch cluster health, or whatever runs in your pods.
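
As an illustration of that portability, here is what the same exec-and-parse pattern might look like for RabbitMQ queue depths. The pod, container, and namespace names are assumptions, `cnap` is the platform client from the snippets above, and only the output parsing is new:

```javascript
// Parses `rabbitmqctl list_queues name messages` output: tab-separated
// "name<TAB>count" rows, ignoring banner/header lines that aren't data.
function parseQueueDepths(stdout) {
  return stdout
    .trim()
    .split("\n")
    .map(line => line.split("\t"))
    .filter(parts => parts.length === 2 && /^\d+$/.test(parts[1]))
    .map(([queue, messages]) => ({ queue, messages: parseInt(messages, 10) }));
}

// Usage inside a snippet (pod/container names are hypothetical):
// const res = await cnap.request({
//   method: "POST",
//   path: `/v1/clusters/${clusterId}/exec`,
//   body: {
//     namespace: "production",
//     pod: "rabbitmq-0",
//     container: "rabbitmq",
//     command: ["rabbitmqctl", "list_queues", "name", "messages"],
//   },
// });
// return parseQueueDepths(res.body.stdout);
```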

Build your own

Read the dashboard generation guide to learn the workflow, then connect your agent and describe the dashboard you want.