Kubernetes clusters often have Custom Resource Definitions (CRDs) installed by operators, service meshes, and platform tools. These are APIs the agent has never seen in its training data. Through the kube proxy, it can discover them, read their schemas, and interact with them — no pre-built tools required. There are two ways to discover CRDs, each suited to a different question.

Approach 1: CRD API

The CRD API lists every custom resource definition installed in the cluster — names, groups, versions, and scope:
async () => {
  const clusterId = "cls_abc123"; // resolved by the agent from conversation

  const crds = await cnap.request({
    method: "GET",
    path: `/v1/clusters/${clusterId}/kube/apis/apiextensions.k8s.io/v1/customresourcedefinitions`,
  }).then(r => r.body);

  return crds.items.map(c => ({
    name: c.metadata.name,
    group: c.spec.group,
    kind: c.spec.names.kind,
    scope: c.spec.scope,
    versions: c.spec.versions.map(v => v.name),
    created: c.metadata.creationTimestamp,
  }));
}
The agent can then drill into a specific CRD to read its embedded structural schema:
async () => {
  const clusterId = "cls_abc123"; // resolved by the agent from conversation

  const crd = await cnap.request({
    method: "GET",
    path: `/v1/clusters/${clusterId}/kube/apis/apiextensions.k8s.io/v1/customresourcedefinitions/ciliumnetworkpolicies.cilium.io`,
  }).then(r => r.body);

  // Extract the schema from the stored version
  const version = crd.spec.versions.find(v => v.served && v.storage);
  return {
    kind: crd.spec.names.kind,
    group: crd.spec.group,
    version: version?.name,
    schema: version?.schema?.openAPIV3Schema?.properties?.spec,
  };
}
Best for: “What CRDs are installed?” — quick inventory, metadata, and embedded schemas.
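
Once the agent has the embedded schema, a small helper can flatten it into something readable. A minimal sketch — the `summarizeSchema` helper and the sample schema below are illustrative, not part of the CNAP API:

```javascript
// Summarize the top level of an embedded openAPIV3Schema:
// property names, their types, and whether they are required.
function summarizeSchema(schema) {
  if (!schema || !schema.properties) return [];
  const required = new Set(schema.required || []);
  return Object.entries(schema.properties).map(([name, prop]) => ({
    name,
    type: prop.type || (prop.properties ? "object" : "unknown"),
    required: required.has(name),
    description: prop.description,
  }));
}

// Example input: the shape a CRD's spec schema typically takes
// (hypothetical values, standing in for crd.spec.versions[..].schema).
const specSchema = {
  required: ["endpointSelector"],
  properties: {
    endpointSelector: { type: "object", description: "Pods this policy applies to" },
    ingress: { type: "array" },
    egress: { type: "array" },
  },
};

const rows = summarizeSchema(specSchema);
```

The agent can feed a summary like this into its reasoning instead of the raw schema, which for large CRDs can run to thousands of lines.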

Approach 2: Kubernetes OpenAPI Spec

Since Kubernetes 1.15, CRDs with structural schemas have been included in the cluster’s OpenAPI spec. The agent can fetch the full API surface — including custom resources — as proper OpenAPI paths (the /openapi/v3 discovery endpoint used below requires a newer cluster, roughly 1.24+):
async () => {
  const clusterId = "cls_abc123"; // resolved by the agent from conversation

  const kube = (path) => cnap.request({
    method: "GET",
    path: `/v1/clusters/${clusterId}/kube/${path}`,
  }).then(r => r.body);

  // /openapi/v3 returns an index of all API group paths
  const index = await kube("openapi/v3");

  // Each CRD group has its own path, e.g. "apis/cilium.io/v2"
  // Fetch a specific group to get full REST paths and schemas
  const cilium = await kube("openapi/v3/apis/cilium.io/v2");

  return {
    available_groups: Object.keys(index.paths || {}),
    cilium_paths: Object.keys(cilium.paths || {}),
  };
}
This gives the agent the actual REST endpoints (GET /apis/cilium.io/v2/namespaces/{ns}/ciliumnetworkpolicies) with full request/response schemas — the same format as the CNAP OpenAPI spec it already knows how to query with search.

Best for: “How do I call this custom API?” — full REST paths, request bodies, and response schemas.
Use the CRD API for discovery (“what’s installed?”) and the OpenAPI spec for understanding (“how do I use it?”). An agent typically starts with the CRD API to find what’s interesting, then fetches the OpenAPI group for the full schema.
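
OpenAPI group documents lean heavily on `$ref` pointers into `components.schemas`, so the agent needs to dereference them before it can read a request body. A sketch with a hand-written miniature group document — real ones come back from `GET .../kube/openapi/v3/apis/<group>/<version>` and are much larger, but have the same shape:

```javascript
// Resolve a local "#/components/schemas/..." $ref inside an
// OpenAPI v3 document by walking the pointer segments.
function resolveRef(doc, ref) {
  if (!ref.startsWith("#/")) throw new Error(`only local refs supported: ${ref}`);
  return ref.slice(2).split("/").reduce((node, key) => node?.[key], doc);
}

// Miniature group document (illustrative).
const doc = {
  paths: {
    "/apis/cilium.io/v2/namespaces/{namespace}/ciliumnetworkpolicies": {
      post: {
        requestBody: {
          content: {
            "application/json": {
              schema: { $ref: "#/components/schemas/io.cilium.v2.CiliumNetworkPolicy" },
            },
          },
        },
      },
    },
  },
  components: {
    schemas: {
      "io.cilium.v2.CiliumNetworkPolicy": {
        type: "object",
        properties: { spec: { type: "object" } },
      },
    },
  },
};

const post = doc.paths["/apis/cilium.io/v2/namespaces/{namespace}/ciliumnetworkpolicies"].post;
const schema = resolveRef(doc, post.requestBody.content["application/json"].schema.$ref);
```

(This simple walker skips the JSON Pointer `~0`/`~1` escapes, which Kubernetes schema names don’t use in practice.)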

List Custom Resources

With the group, version, and resource name from the CRD, the agent queries instances:
async () => {
  const clusterId = "cls_abc123"; // resolved by the agent from conversation

  const kube = (path) => cnap.request({
    method: "GET",
    path: `/v1/clusters/${clusterId}/kube/${path}`,
  }).then(r => r.body);

  // List all CiliumNetworkPolicies across namespaces
  const policies = await kube("apis/cilium.io/v2/ciliumnetworkpolicies");

  return policies.items.map(p => ({
    name: p.metadata.name,
    namespace: p.metadata.namespace,
    endpoints: p.spec?.endpointSelector,
    ingress_rules: p.spec?.ingress?.length || 0,
    egress_rules: p.spec?.egress?.length || 0,
  }));
}
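
The list path depends on the CRD’s scope: cluster-scoped resources have no namespace segment, and namespaced ones can be listed either per namespace or across all namespaces. A small path builder, sketched here with a hypothetical `resourcePath` helper and an illustrative CRD object:

```javascript
// Build the list path for a custom resource from its CRD,
// honoring cluster vs. namespace scope. Omitting `namespace`
// for a namespaced resource lists it across all namespaces.
function resourcePath(crd, namespace) {
  const group = crd.spec.group;
  const version = crd.spec.versions.find(v => v.served)?.name;
  const plural = crd.spec.names.plural;
  if (crd.spec.scope === "Namespaced" && namespace) {
    return `apis/${group}/${version}/namespaces/${namespace}/${plural}`;
  }
  return `apis/${group}/${version}/${plural}`;
}

// Illustrative CRD shape; real objects come from the CRD API above.
const namespaced = {
  spec: {
    group: "cilium.io",
    scope: "Namespaced",
    names: { plural: "ciliumnetworkpolicies" },
    versions: [{ name: "v2", served: true }],
  },
};
```

The agent would then pass the result straight to the kube helper, e.g. `kube(resourcePath(namespaced, "prod"))`.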

Full Discovery Flow

The agent can chain the whole thing — discover CRDs, pick one it hasn’t seen, learn its schema, and query instances:
async () => {
  const clusterId = "cls_abc123"; // resolved by the agent from conversation

  const kube = (path) => cnap.request({
    method: "GET",
    path: `/v1/clusters/${clusterId}/kube/${path}`,
  }).then(r => r.body);

  // 1. Discover all CRDs
  const crds = await kube("apis/apiextensions.k8s.io/v1/customresourcedefinitions");

  // 2. Group by API group
  const groups = {};
  for (const crd of crds.items) {
    const group = crd.spec.group;
    if (!groups[group]) groups[group] = [];
    groups[group].push({
      kind: crd.spec.names.kind,
      plural: crd.spec.names.plural,
      version: crd.spec.versions.find(v => v.served)?.name,
    });
  }

  // 3. For each group, count instances
  const summary = await Promise.all(
    Object.entries(groups).map(async ([group, resources]) => {
      const counts = await Promise.all(
        resources.map(async (r) => {
          try {
            const list = await kube(`apis/${group}/${r.version}/${r.plural}`);
            return { kind: r.kind, count: list.items?.length || 0 };
          } catch {
            return { kind: r.kind, count: 0, error: true };
          }
        })
      );
      return { group, resources: counts };
    })
  );

  return {
    total_crds: crds.items.length,
    groups: summary.filter(g => g.resources.some(r => r.count > 0)),
  };
}
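
On large clusters, a discovery sweep like this can pull thousands of objects per list call. The Kubernetes list API supports pagination via the `limit` and `continue` query parameters, with the continuation token returned in `metadata.continue`. A sketch of a paging helper — `kube` is any GET helper like the ones above, assumed to return the parsed list body:

```javascript
// Page through a Kubernetes list endpoint using `limit` and
// `continue`, accumulating items until no token remains.
async function listAll(kube, basePath, limit = 500) {
  const items = [];
  let continueToken;
  do {
    const sep = basePath.includes("?") ? "&" : "?";
    const query =
      `limit=${limit}` +
      (continueToken ? `&continue=${encodeURIComponent(continueToken)}` : "");
    const page = await kube(`${basePath}${sep}${query}`);
    items.push(...(page.items || []));
    continueToken = page.metadata?.continue;
  } while (continueToken);
  return items;
}
```

Swapping `kube(...)` for `listAll(kube, ...)` in the discovery flow keeps each response small without changing the rest of the logic.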

Why This Matters

Traditional MCP servers need a pre-built tool for every API. When a cluster has CRDs from Cilium, cert-manager, Prometheus, Argo, or any other operator, those tools don’t exist. The agent is stuck. With Code Mode + the kube proxy, the agent is self-sufficient. It discovers what’s available, learns the schema, and interacts with custom resources — all at runtime, without any code changes to the MCP server. The API surface of the MCP grows automatically with whatever is installed in the cluster.