Knowledge Bases are first-class collections for reference material that should stay stable. While conversational memory evolves with every session, a Knowledge Base holds content you import once and query repeatedly — documentation, internal policies, runbooks, API references, or any source material your AI needs as a permanent foundation.

How Knowledge Bases differ from memory

Conversational memory:

  • Written automatically from conversation turns
  • Evolves continuously as you work
  • Can be edited, archived, or deleted
  • Reflects your current knowledge and history
  • Searched by the Context Engine on every turn

Knowledge Bases:

  • Imported explicitly and kept stable
  • Never overwritten or merged with conversational context
  • Isolated by tenant and scope
  • Queried on demand by ID
The two are complementary. Your agent can draw on what it has learned from your conversations and simultaneously look up the exact wording from a policy document or runbook — without the reference material being overwritten or merged with conversational context.

Use cases

Internal documentation

Import engineering docs, architecture guides, or onboarding materials so your AI can answer questions directly from source.

Policies and rules

Load compliance policies, coding standards, or security guidelines that your AI should always have available.

Runbooks

Store operational procedures and incident response playbooks for reliable, consistent retrieval.

Source material

Import external reference data — vendor docs, specifications, or exported content — for fast semantic lookup.

Scope-aware privacy

Knowledge Bases are isolated by tenant and scope. When you create a Knowledge Base, you can assign it a scope — a workspace, client identifier, or agent lane. Queries only return results from Knowledge Bases that match the caller’s tenant and scope. This means you can maintain separate Knowledge Bases for different clients or environments without any risk of cross-contamination. A query from one workspace never surfaces content from another.
GET /api/knowledge-bases?scope=client-acme
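The isolation guarantee amounts to a filter applied before any search runs: only Knowledge Bases matching both the caller's tenant and scope are ever candidates. A minimal sketch of that check, using an in-memory list of records (the field names here are illustrative, not Memory Crystal's actual schema):

```python
from dataclasses import dataclass

@dataclass
class KnowledgeBase:
    id: str
    tenant: str
    scope: str  # e.g. a workspace, client identifier, or agent lane

def visible_knowledge_bases(all_kbs, caller_tenant, caller_scope):
    """Return only the Knowledge Bases matching the caller's tenant and scope."""
    return [kb for kb in all_kbs
            if kb.tenant == caller_tenant and kb.scope == caller_scope]

kbs = [
    KnowledgeBase("kb_123", "tenant_a", "client-acme"),
    KnowledgeBase("kb_456", "tenant_a", "client-globex"),
    KnowledgeBase("kb_789", "tenant_b", "client-acme"),
]

# A caller in tenant_a / client-acme matches only kb_123:
# same tenant but a different scope (kb_456) is excluded,
# as is the same scope under a different tenant (kb_789).
print([kb.id for kb in visible_knowledge_bases(kbs, "tenant_a", "client-acme")])
```

Because the filter requires both fields to match, two clients who happen to use the same scope name under different tenants still never see each other's content.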

Importing content

There are two import paths depending on the volume of content you’re adding.
Use crystal_import_knowledge or POST /api/knowledge-bases/:id/import for normal ingest. Embedding and graph backfill are scheduled immediately after import.
# Via the API
curl -X POST https://memorycrystal.ai/api/knowledge-bases/kb_123/import \
  -H "Authorization: Bearer <key>" \
  -H "Content-Type: application/json" \
  -d '{
    "chunks": [
      { "content": "Deploy with npm run convex:deploy from the repo root." },
      { "content": "Never push directly to main without a passing CI run." }
    ]
  }'
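For programmatic imports, the request body is plain JSON with a chunks array, as in the curl example above. A sketch of building that payload from a text document (the blank-line splitter is a naive illustration, not Memory Crystal's chunking strategy):

```python
import json

def build_import_payload(text: str) -> str:
    """Split a document on blank lines and wrap each piece as an import chunk."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    return json.dumps({"chunks": [{"content": p} for p in paragraphs]})

doc = """Deploy with npm run convex:deploy from the repo root.

Never push directly to main without a passing CI run."""

# The resulting JSON string is what gets POSTed to
# /api/knowledge-bases/:id/import.
payload = build_import_payload(doc)
print(payload)
```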

Background enrichment

After any import — standard or bulk — Memory Crystal schedules embedding generation and graph backfill as background jobs. The import request returns without waiting for enrichment to finish, even for large Knowledge Bases. Content becomes queryable as chunks are processed, and graph connections are built progressively.
Freshly imported chunks may not be immediately available for semantic search until their embeddings are generated. For large imports, allow a few minutes for the background jobs to complete.
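Because enrichment is asynchronous, a client that queries immediately after importing may want a simple retry loop rather than treating an empty result as final. A sketch with a stub standing in for the real query call (the helper name and backoff values are illustrative, not part of the API):

```python
import time

def query_until_ready(query_fn, attempts=5, delay=1.0, backoff=2.0):
    """Retry a query with exponential backoff until it returns results."""
    for _ in range(attempts):
        results = query_fn()
        if results:
            return results
        time.sleep(delay)
        delay *= backoff
    return []

# Stub: pretend embeddings finish after two empty polls.
calls = {"n": 0}
def fake_query():
    calls["n"] += 1
    return ["matching chunk"] if calls["n"] >= 3 else []

print(query_until_ready(fake_query, delay=0.01))
```

For genuinely large imports, a longer initial delay is more appropriate than tight polling, since the doc above suggests allowing a few minutes for background jobs to complete.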

Querying a Knowledge Base

Use crystal_query_knowledge_base to search a specific Knowledge Base by ID. The tool returns relevant source chunks and reference answers based on your query.
crystal_query_knowledge_base
  knowledgeBaseId: kb_123
  query: "How do I deploy to production?"
You can also list available Knowledge Bases — including scoped and private collections — using crystal_list_knowledge_bases. For full usage details and available options, see the Knowledge Bases tools reference.
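A tool invocation like the one shown above is just a tool name plus an arguments object. A sketch of assembling and validating that call as JSON (the envelope shape is illustrative; the tool name and parameter names come from the doc):

```python
import json

def build_query_call(knowledge_base_id: str, query: str) -> str:
    """Assemble the arguments for a crystal_query_knowledge_base call."""
    if not knowledge_base_id or not query:
        raise ValueError("knowledgeBaseId and query are both required")
    return json.dumps({
        "tool": "crystal_query_knowledge_base",
        "arguments": {"knowledgeBaseId": knowledge_base_id, "query": query},
    })

print(build_query_call("kb_123", "How do I deploy to production?"))
```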