An agent asked to “audit the cluster for security” ran 9 different security checks across 7 parallel Kubernetes API calls — pods, services, secrets, RBAC roles, RBAC bindings, network policies, and nodes — all in a single execute call.

What It Checks

| Check | What it looks for |
| --- | --- |
| Pod security context | Privileged containers, running as root, privilege escalation, writable root filesystem, capabilities not dropped |
| Image tags | Containers using :latest instead of pinned tags |
| Resource limits | Missing requests or limits |
| Exposed services | NodePort or LoadBalancer services directly accessible |
| RBAC — cluster-admin | Non-system bindings to the cluster-admin role |
| RBAC — wildcards | Roles with * on resources, verbs, or API groups |
| Network policies | Namespaces with no network policies (unrestricted pod-to-pod traffic) |
| Host namespaces | Pods using hostNetwork, hostPID, or hostIPC |
| Service account tokens | Non-system pods with automounted service account tokens |
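
Each row in the table maps to a few lines of plain JavaScript. As a minimal sketch (the `auditContainer` name and input shape are illustrative, not part of the agent's actual output), the container-level checks can be written as a pure function over a Kubernetes container spec:

```javascript
// Sketch of the container-level checks from the table above.
// `auditContainer` is a hypothetical helper; its input mirrors the
// `spec.containers[]` shape of the Kubernetes Pod API.
function auditContainer(container) {
  const sc = container.securityContext || {};
  const issues = [];
  if (sc.privileged) issues.push("privileged");
  if (sc.allowPrivilegeEscalation !== false) issues.push("privilege escalation allowed");
  if (!sc.readOnlyRootFilesystem) issues.push("writable root filesystem");
  if (!sc.capabilities?.drop?.includes("ALL")) issues.push("capabilities not dropped");
  if (container.image?.includes(":latest")) issues.push("uses :latest tag");
  if (!container.resources?.limits) issues.push("no resource limits");
  if (!container.resources?.requests) issues.push("no resource requests");
  return issues;
}
```

A fully locked-down container returns an empty list; a bare `{ image: "nginx:latest" }` trips six of the seven checks.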

The Code

async () => {
  const clusterId = "cls_abc123"; // resolved by the agent from conversation

  const kube = (path) => cnap.request({
    method: "GET",
    path: `/v1/clusters/${clusterId}/kube/${path}`,
  }).then(r => r.body);

  // 7 API calls in parallel
  const [pods, services, secrets, clusterRoles, clusterRoleBindings, networkPolicies, nodes] =
    await Promise.all([
      kube("api/v1/pods"),
      kube("api/v1/services"),
      kube("api/v1/secrets"),
      kube("apis/rbac.authorization.k8s.io/v1/clusterroles"),
      kube("apis/rbac.authorization.k8s.io/v1/clusterrolebindings"),
      kube("apis/networking.k8s.io/v1/networkpolicies"),
      kube("api/v1/nodes"),
    ]);

  // 1. Pod security audit
  const podIssues = [];
  for (const pod of pods.items) {
    for (const c of pod.spec.containers) {
      const sc = c.securityContext || {};
      const issues = [];
      if (sc.privileged) issues.push("privileged");
      if (sc.runAsUser === 0 || (!sc.runAsNonRoot && !pod.spec.securityContext?.runAsNonRoot))
        issues.push("may run as root");
      if (sc.allowPrivilegeEscalation !== false) issues.push("privilege escalation allowed");
      if (!sc.readOnlyRootFilesystem) issues.push("writable root filesystem");
      if (!sc.capabilities?.drop?.includes("ALL")) issues.push("capabilities not dropped");
      if (c.image?.includes(":latest")) issues.push("uses :latest tag");
      if (!c.resources?.limits) issues.push("no resource limits");
      if (!c.resources?.requests) issues.push("no resource requests");
      if (issues.length > 0) {
        podIssues.push({ pod: pod.metadata.name, ns: pod.metadata.namespace, container: c.name, issues });
      }
    }
  }

  // 2. Secrets in default namespace
  const defaultSecrets = secrets.items
    .filter(s => s.metadata.namespace === "default" && s.type !== "kubernetes.io/service-account-token")
    .map(s => ({ name: s.metadata.name, type: s.type }));

  // 3. Services exposed externally
  const exposedServices = services.items
    .filter(s => s.spec.type === "NodePort" || s.spec.type === "LoadBalancer")
    .map(s => ({ name: s.metadata.name, ns: s.metadata.namespace, type: s.spec.type }));

  // 4. Overly permissive RBAC
  const dangerousBindings = clusterRoleBindings.items
    .filter(b => b.roleRef.name === "cluster-admin")
    .map(b => ({
      name: b.metadata.name,
      subjects: b.subjects?.map(s => `${s.kind}:${s.namespace || ""}/${s.name}`),
    }));

  // 5. Wildcard RBAC rules
  const wildcardRoles = clusterRoles.items
    .filter(r => r.rules?.some(rule =>
      rule.resources?.includes("*") || rule.verbs?.includes("*") || rule.apiGroups?.includes("*")
    ))
    .map(r => r.metadata.name);

  // 6. Network policies per namespace
  const allNs = [...new Set(pods.items.map(p => p.metadata.namespace))];
  const npByNs = {};
  for (const np of networkPolicies.items) {
    npByNs[np.metadata.namespace] = (npByNs[np.metadata.namespace] || 0) + 1;
  }
  const nsWithoutNetpol = allNs.filter(ns => !npByNs[ns]);

  // 7. Host namespaces
  const hostNsPods = pods.items
    .filter(p => p.spec.hostNetwork || p.spec.hostPID || p.spec.hostIPC)
    .map(p => ({ pod: p.metadata.name, ns: p.metadata.namespace }));

  // 8. SA token automount
  const automountPods = pods.items
    .filter(p => p.spec.automountServiceAccountToken !== false && !p.metadata.namespace.startsWith("kube-"))
    .map(p => ({ pod: p.metadata.name, ns: p.metadata.namespace, sa: p.spec.serviceAccountName }));

  // 9. Node info
  const nodeInfo = nodes.items.map(n => ({
    name: n.metadata.name,
    kubelet: n.status.nodeInfo.kubeletVersion,
    os: n.status.nodeInfo.osImage,
    containerRuntime: n.status.nodeInfo.containerRuntimeVersion,
  }));

  return {
    summary: {
      total_pods: pods.items.length,
      total_containers: pods.items.reduce((s, p) => s + p.spec.containers.length, 0),
      containers_with_issues: podIssues.length,
    },
    pod_security: podIssues,
    host_namespace_pods: hostNsPods,
    exposed_services: exposedServices,
    cluster_admin_bindings: dangerousBindings,
    wildcard_rbac_roles: wildcardRoles,
    namespaces_without_network_policies: nsWithoutNetpol,
    sa_automount_tokens: automountPods,
    default_namespace_secrets: defaultSecrets,
    node_info: nodeInfo,
  };
}
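
Because the report is plain JSON, downstream logic can act on it directly. A hedged sketch of turning it into a pass/fail signal (the `summarizeReport` helper and the choice of which categories count as blocking are my own assumptions, not part of the agent's output):

```javascript
// Hypothetical consumer of the audit report returned above.
// Treats cluster-admin bindings, wildcard roles, and host-namespace
// pods as blocking; pod-level findings and exposed services as advisory.
function summarizeReport(report) {
  const blocking =
    report.cluster_admin_bindings.length +
    report.wildcard_rbac_roles.length +
    report.host_namespace_pods.length;
  return {
    blocking,
    advisory: report.pod_security.length + report.exposed_services.length,
    pass: blocking === 0,
  };
}
```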

Why This Matters

A traditional approach would require the agent to make 7+ sequential tool calls just to fetch the data, then somehow reason about all the raw JSON in its context window. The context would be flooded with every pod spec, every RBAC rule, every service definition. Code Mode runs all 7 fetches in parallel and does the analysis inside the sandbox. The agent wrote the security checks — privileged containers, missing capabilities, writable filesystems, RBAC wildcards, network policy gaps — as JavaScript logic. The LLM receives a structured findings report, not raw Kubernetes API responses.

What the Agent Does

  1. Fires 7 Kubernetes API calls in parallel (Promise.all)
  2. Audits every container’s security context (8 checks per container)
  3. Identifies overly permissive RBAC bindings and wildcard roles
  4. Finds namespaces with no network policies
  5. Flags pods using host namespaces or automounted service account tokens
  6. Collects node runtime info for version auditing
  7. Returns a structured report with findings grouped by check category
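
The code groups findings by check category; layering severities on top is a small extra step. A sketch, with severity levels that are my own assumption rather than anything the agent produced:

```javascript
// Hypothetical severity mapping layered over the report's categories.
const SEVERITY = {
  cluster_admin_bindings: "critical",
  wildcard_rbac_roles: "critical",
  host_namespace_pods: "high",
  exposed_services: "high",
  pod_security: "medium",
  namespaces_without_network_policies: "medium",
  sa_automount_tokens: "low",
};

// Regroups findings from category keys into severity buckets.
function groupBySeverity(report) {
  const out = { critical: [], high: [], medium: [], low: [] };
  for (const [category, severity] of Object.entries(SEVERITY)) {
    for (const finding of report[category] || []) {
      out[severity].push({ category, finding });
    }
  }
  return out;
}
```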

From Audit to Remediation

The agent doesn’t just report findings — it can fix them. After the audit flagged zero network policies across all namespaces, a follow-up prompt, “create network policies for my installs”, produced two more execute calls:

  1. Discovery: fetched services and pods from each install namespace in parallel to learn which ports each app needs.
  2. Apply policies: created 3 network policies per namespace using POST through the kube proxy:
async () => {
  const clusterId = "cls_abc123"; // resolved by the agent from conversation

  const kubePost = (path, body) => cnap.request({
    method: "POST",
    path: `/v1/clusters/${clusterId}/kube/${path}`,
    body,
  }).then(r => ({ status: r.status, name: r.body?.metadata?.name }));

  const namespaces = [
    { ns: "pde-k17236...", name: "cloudflare-gateway", ports: [] },
    { ns: "pde-k1737k...", name: "openclaw2", ports: [18789] },
    { ns: "pde-k174wk...", name: "httpbin", ports: [8000] },
    { ns: "pde-k175jm...", name: "openclaw", ports: [18789] },
  ];

  const results = [];
  for (const { ns, name, ports } of namespaces) {
    const apiPath = `apis/networking.k8s.io/v1/namespaces/${ns}/networkpolicies`;

    // 1. Default deny all ingress + egress
    const defaultDeny = await kubePost(apiPath, {
      apiVersion: "networking.k8s.io/v1",
      kind: "NetworkPolicy",
      metadata: { name: "default-deny-all", namespace: ns },
      spec: { podSelector: {}, policyTypes: ["Ingress", "Egress"] },
    });

    // 2. Allow DNS egress to kube-system
    const allowDns = await kubePost(apiPath, {
      apiVersion: "networking.k8s.io/v1",
      kind: "NetworkPolicy",
      metadata: { name: "allow-dns", namespace: ns },
      spec: {
        podSelector: {},
        policyTypes: ["Egress"],
        egress: [{
          to: [{ namespaceSelector: { matchLabels: { "kubernetes.io/metadata.name": "kube-system" } } }],
          ports: [{ protocol: "UDP", port: 53 }, { protocol: "TCP", port: 53 }],
        }],
      },
    });

    // 3. Allow ingress on app ports (if any)
    let allowIngress = null;
    if (ports.length > 0) {
      allowIngress = await kubePost(apiPath, {
        apiVersion: "networking.k8s.io/v1",
        kind: "NetworkPolicy",
        metadata: { name: "allow-app-ingress", namespace: ns },
        spec: {
          podSelector: {},
          policyTypes: ["Ingress"],
          ingress: [{ ports: ports.map(p => ({ protocol: "TCP", port: p })) }],
        },
      });
    }

    results.push({ install: name, defaultDeny, allowDns, allowIngress });
  }
  return results;
}
The result: 3 policies per namespace, all returning 201 Created:
| Install | default-deny-all | allow-dns | allow-app-ingress |
| --- | --- | --- | --- |
| cloudflare-gateway | ✓ | ✓ | — (no ports) |
| openclaw2 | ✓ | ✓ | ✓ (TCP 18789) |
| httpbin | ✓ | ✓ | ✓ (TCP 8000) |
| openclaw | ✓ | ✓ | ✓ (TCP 18789) |
Each namespace is now isolated — pods can only receive traffic on their app port and make DNS queries. No cross-namespace traffic. The agent went from finding the problem to fixing it without leaving the conversation.
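
To confirm the remediation stuck, the audit's network-policy gap check can be re-run against a freshly fetched policy list. A minimal sketch as a pure function (the `findNetpolGaps` name is mine; the input mirrors the shape of a Kubernetes NetworkPolicyList response):

```javascript
// Re-runs the gap check from the audit: given the namespaces of interest
// and a NetworkPolicyList, returns the namespaces still missing policies.
function findNetpolGaps(namespaces, networkPolicyList) {
  const covered = new Set(networkPolicyList.items.map(np => np.metadata.namespace));
  return namespaces.filter(ns => !covered.has(ns));
}
```

After the remediation run above, this check should come back empty for every install namespace.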