Snippets run in the same cnap.request() sandbox used by Code Mode. This makes them a natural fit for AI-driven generation: your agent already knows how to write this code.
## Why Use an Agent
Snippet code is an implementation detail, not something you hand-craft. The JavaScript is simple, repetitive, and easy to regenerate — treat it as disposable. Instead of learning API endpoints and debugging code yourself, describe what you want and let your agent handle it:

| You say | The agent does |
|---|---|
| "Create a dashboard showing cluster health" | Writes snippets for cluster status, KaaS version, and error counts — creates the dashboard with all widgets |
| "Add a stat widget for total installs" | Creates a snippet that counts installs, adds it to the dashboard |
| "Make the cluster table show regions too" | Updates the snippet code to join cluster and region data |
| "The logs widget is too wide, make it half" | Adjusts the widget's column span from 4 to 2 |
## Workflow
### Describe what you want

Tell your agent what data matters to you. Be specific about the use case, not the implementation:

"I want a dashboard that shows how many clusters I have, which ones are healthy, and the last 50 lines of logs from my production install."
### The agent writes and tests the snippets
The agent writes the JavaScript for each widget — and can test it immediately. Since snippets use the same cnap.request() sandbox as Code Mode, the agent test-runs the code via execute_code, sees errors or unexpected results, fixes them, and only saves the working version as a snippet. You never see the debugging — just the finished widget.

### Iterate conversationally
Review the result and refine. The agent can update snippet code, change display types, resize widgets, reorder the layout, or add new widgets — all from natural language.

"Split the cluster table into two widgets — one for KaaS clusters and one for imported."
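To make the workflow concrete, here is a sketch of the kind of stat snippet an agent might end up saving. The endpoint path ("/v1/clusters"), the response shape ({ items: [...] }), and the status field values are illustrative assumptions, not documented API — the agent discovers the real ones by test-running against the sandbox:

```javascript
// Hypothetical stat snippet: count healthy clusters.
// The endpoint path and response shape are assumptions; the agent
// would verify both via execute_code before saving the snippet.
async function healthyClusterStat() {
  const res = await cnap.request("/v1/clusters");
  const clusters = res.items ?? [];
  const healthy = clusters.filter((c) => c.status === "healthy").length;
  return { value: healthy, total: clusters.length, label: "Healthy clusters" };
}
```

The point is that this code is disposable: if the API shape turns out to differ, the agent regenerates it rather than you debugging it.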
## What Agents Can Generate
Agents can create any snippet that the CNAP API supports. Common patterns:

- Resource tables — List clusters, installs, products, or regions with selected columns
- KPI stats — Count resources, calculate ratios, show status values
- Health reports — Multi-step queries that check pod status, fetch logs, and summarize issues
- Audit views — Cross-reference installs with clusters and products for compliance or inventory
- Log viewers — Fetch and format container logs from running pods
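A resource-table pattern often involves joining two listings, as in the earlier "show regions too" request. This sketch assumes hypothetical endpoint paths and field names (items, id, regionId) purely for illustration:

```javascript
// Hypothetical resource-table snippet: clusters joined with region names.
// Endpoint paths and field names are assumptions about the API shape.
async function clusterTable() {
  const [clusters, regions] = await Promise.all([
    cnap.request("/v1/clusters"),
    cnap.request("/v1/regions"),
  ]);
  // Build a lookup so each cluster row can show a human-readable region.
  const regionName = new Map((regions.items ?? []).map((r) => [r.id, r.name]));
  return (clusters.items ?? []).map((c) => ({
    name: c.name,
    status: c.status,
    region: regionName.get(c.regionId) ?? "unknown",
  }));
}
```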
## Beyond Platform Data — Application-Level Dashboards
Here’s where it gets powerful. Snippets can use the exec endpoint to run commands inside your running containers. That means dashboards aren’t limited to CNAP platform data — they can reach into your actual applications. Ask your agent to build widgets like these:

| Widget idea | What the snippet does |
|---|---|
| Postgres table sizes | Runs psql via exec to query pg_stat_user_tables — shows table names, row counts, and disk usage |
| Redis memory breakdown | Runs redis-cli INFO memory inside the Redis pod — displays used memory, peak memory, fragmentation ratio |
| MongoDB collection stats | Runs mongosh --eval to list collections with document counts and storage sizes |
| Application health check | Hits an internal /healthz or /metrics endpoint via curl inside the pod |
| Queue depth | Queries RabbitMQ or Celery via CLI to show pending/processing/failed job counts |
| Nginx access stats | Parses recent access logs with awk to show top endpoints, status code distribution, and request rates |
| SSL certificate expiry | Runs openssl to check certificate dates across services — flags anything expiring soon |
| Disk usage per PVC | Runs df -h inside pods to show persistent volume usage before you run out of space |
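As one sketch of the exec pattern, a disk-usage widget might look like the following. The exec endpoint path, the request options, and the stdout field are all assumptions about the API shape rather than documented behavior:

```javascript
// Hypothetical exec-based snippet: disk usage of a pod's volume.
// The exec path, request body, and response field are assumptions;
// an agent would confirm the real shape by test-running in the sandbox.
async function volumeUsage(installId, podName) {
  const res = await cnap.request(
    `/v1/installs/${installId}/pods/${podName}/exec`,
    { method: "POST", body: { command: ["df", "-h", "/data"] } }
  );
  const lines = (res.stdout ?? "").trim().split("\n");
  // df prints a header row first; the last line holds the numbers.
  return lines.length > 1 ? lines[lines.length - 1] : "no output";
}
```

The same skeleton — run a CLI command, parse its stdout, return display-ready values — covers the other widget ideas in the table above.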
## Tips
- Let the agent debug for you — If a widget shows an error, paste it into the conversation. The agent can test-run the snippet code via Code Mode, see exactly what failed, fix it, and update the snippet — all without you touching JavaScript.
- Start broad, then refine — Ask for a general dashboard first, then tweak individual widgets
- Don’t optimize the JavaScript — If a snippet works, it’s good enough. Regenerate rather than debug.
- Use descriptive names — Tell the agent to name snippets clearly (e.g. “Cluster Health by Region” not “query1”) so they’re easy to reuse across dashboards
- Combine with Code Mode — Use Code Mode for one-off investigations, then save the useful queries as snippets for ongoing monitoring
## Related Topics
- Custom Dashboards — Manual dashboard and snippet creation
- Code Mode in Action — See how the same sandbox powers AI agent operations
- Platform MCP — Connect AI agents to CNAP