Memory Crystal doesn’t search just one thing. Every query runs across both memory layers simultaneously, combines multiple scoring signals, and filters by channel scope. This page explains how the search system works and how to get the most out of it.

How Memory Crystal searches

When you call crystal_recall or crystal_search_messages, Memory Crystal runs a hybrid search:
  1. Embedding — your query is embedded into a vector using the configured embedding provider (OpenAI or Gemini).
  2. Vector search — the vector is matched against stored embeddings using cosine similarity.
  3. BM25 text search — a keyword-based search runs in parallel over the same data.
  4. Score fusion — vector and BM25 scores are combined using a weighted fusion formula.
  5. Re-ranking — results are re-scored using multiple signals: vector similarity, memory strength, freshness, access frequency, salience, conversational continuity, and text match quality.
  6. Diversity filter — near-duplicate results are removed to avoid surfacing the same fact multiple times with slight variations.
  7. Budget gating — results are trimmed to fit the model’s context window budget.
The result is a ranked list that reflects both semantic relevance and the accumulated weight of your memory history.
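Steps 4 and 6 of the pipeline can be sketched in TypeScript. This is an illustrative sketch only: the fusion weight, the BM25 normalization, and the duplicate-similarity threshold below are assumptions for the example, not Memory Crystal's actual parameters.

```typescript
interface Candidate {
  id: string;
  vectorScore: number; // cosine similarity, already in 0..1
  bm25Score: number;   // raw BM25 score, unbounded
  embedding: number[];
}

// Step 4 (sketch): normalize BM25 into 0..1, then take a weighted sum
// of the two signals. The 0.7 vector weight is an assumed value.
function fuseScores(candidates: Candidate[], vectorWeight = 0.7): Map<string, number> {
  const maxBm25 = Math.max(...candidates.map((c) => c.bm25Score), 1e-9);
  const fused = new Map<string, number>();
  for (const c of candidates) {
    const bm25Norm = c.bm25Score / maxBm25;
    fused.set(c.id, vectorWeight * c.vectorScore + (1 - vectorWeight) * bm25Norm);
  }
  return fused;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Step 6 (sketch): walk the ranked list and drop any candidate whose
// embedding is nearly identical to one already kept.
function diversityFilter(ranked: Candidate[], threshold = 0.95): Candidate[] {
  const kept: Candidate[] = [];
  for (const c of ranked) {
    if (kept.every((k) => cosine(k.embedding, c.embedding) < threshold)) kept.push(c);
  }
  return kept;
}
```

The key design point the sketch illustrates: fusion happens on normalized scores so that unbounded BM25 values cannot swamp the bounded cosine similarities, and deduplication happens after ranking so the highest-scoring variant of a near-duplicate group survives.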

Two tools, two layers

crystal_recall searches long-term memory — the extracted, distilled facts, decisions, lessons, and rules that Memory Crystal has built from your conversation history. Use crystal_recall when you want:
  • Decisions made in past sessions
  • Lessons learned from past mistakes
  • Facts about your project, stack, or team
  • Rules and procedures
  • Anything that was extracted and stored as a durable memory
Results from crystal_recall are scored by memory strength, confidence, and how recently the memory has been accessed — not just semantic similarity.
crystal_recall({
  query: "authentication strategy",
  mode: "decision",
  limit: 8
})
crystal_search_messages searches the other layer: the raw conversation history, returning what was actually said rather than the distilled facts extracted from it. The Context Engine runs both searches automatically before every response. You only need to call these tools explicitly when you want results injected into a specific response, or when your AI client doesn’t have the OpenClaw plugin managing recall automatically.

Legacy compatibility tools

Two additional tools exist for backwards compatibility with integrations that were built before the crystal_* tool surface was introduced:
memory_search → Searches LTM and returns crystal/<id>.md file paths that can be read with memory_get
memory_get → Reads a full memory by memoryId or by a crystal/<id>.md path
These tools are only available in the OpenClaw plugin. Prefer crystal_recall for new integrations — it returns richer structured data and supports all recall modes and filters.
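The relationship between the two legacy tools can be shown with a small helper that extracts the memoryId from a path returned by memory_search. This helper is a hypothetical illustration of the path shape, not code from the plugin:

```typescript
// memory_search returns paths like "crystal/<id>.md"; memory_get accepts
// either that path or the bare memoryId. Returns null for non-crystal paths.
function memoryIdFromPath(path: string): string | null {
  const match = /^crystal\/(.+)\.md$/.exec(path);
  return match ? match[1] : null;
}
```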

Channel scoping

Channel scoping is how Memory Crystal keeps memories isolated by namespace. Every memory is stored with a channel identifier, and every recall query filters by that channel. This lets you run separate memory namespaces for different clients, projects, or agent types — all on the same account, with no data leaking between namespaces.

How scoping works

The active scope comes from one of three places, in priority order:
  1. channel parameter — passed directly on individual tool calls like crystal_recall({ query: "...", channel: "project:alpha" })
  2. crystal_set_scope — sets a session-level override that applies to all tool calls for the rest of the session (OpenClaw plugin only)
  3. channelScope plugin config — the default scope set in your plugin configuration; applies when no override is active
When a scope is active, only memories stored under that scope are returned. Memories stored without a scope (or with a different scope) are not surfaced.
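The three-level precedence above can be sketched as a small resolver. The function and field names here are illustrative, not the plugin's internal API:

```typescript
interface ScopeSources {
  callChannel?: string;  // 1. channel parameter on the individual tool call
  sessionScope?: string; // 2. session override set via crystal_set_scope
  configScope?: string;  // 3. channelScope from the plugin config
}

// First defined source wins, in priority order; undefined means no scope.
function resolveScope(s: ScopeSources): string | undefined {
  return s.callChannel ?? s.sessionScope ?? s.configScope;
}
```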

Setting scope in the plugin

Configure a default scope in your OpenClaw plugin config:
{
  "plugins": {
    "entries": {
      "crystal-memory": {
        "enabled": true,
        "config": {
          "channelScope": "project:alpha"
        }
      }
    }
  }
}
Or override it for the current session:
crystal_set_scope({ scope: "client:acme" })

Setting scope in the MCP server

Pass channel on individual tool calls:
crystal_recall({
  query: "deployment procedure",
  channel: "project:alpha"
})

crystal_remember({
  store: "procedural",
  category: "workflow",
  title: "Deploy procedure for project alpha",
  content: "...",
  channel: "project:alpha"
})
Use a consistent naming convention for channel scopes — project:name, client:name, agent:name — so they’re easy to manage and filter in the dashboard.
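If you want to enforce that convention in your own tooling, a validator might look like the sketch below. The allowed prefixes and character set are assumptions for this example — Memory Crystal itself does not enforce any particular scope format:

```typescript
// Accepts scopes of the assumed form type:name, where type is one of the
// three conventions suggested above.
function isWellFormedScope(scope: string): boolean {
  return /^(project|client|agent):[A-Za-z0-9_-]+$/.test(scope);
}
```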

Six recall modes

crystal_recall supports six recall modes that shape which memories surface and how they’re ranked. The Context Engine picks a mode automatically during automatic recall; you only need to specify a mode when calling crystal_recall directly.

Default mode

The default mode performs broad recall across all stores and categories. Use it for open-ended questions where you’re not sure what kind of memory you’re looking for.
crystal_recall({ query: "convex schema design" })

Temporal recall

Memory Crystal understands date-aware queries. You can use natural language date expressions in your queries and the system will resolve them:
crystal_recall({ query: "decisions made last week" })

crystal_recall({ query: "what happened before the March deployment" })

crystal_search_messages({
  query: "the staging issue",
  sinceMs: Date.now() - 7 * 24 * 60 * 60 * 1000  // since 7 days ago
})
The sinceMs parameter on crystal_search_messages and crystal_recent lets you filter to a specific time window using a Unix timestamp in milliseconds. For LTM recall, the temporal hybrid retrieval layer in the Context Engine handles date-aware candidate injection automatically.
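For building sinceMs values, a tiny helper keeps the millisecond arithmetic out of the tool call. daysAgoMs is an illustrative helper for this example, not part of the Memory Crystal API:

```typescript
// Unix timestamp (ms) for n days before `now`; `now` defaults to the
// current time and is injectable for testing.
function daysAgoMs(days: number, now: number = Date.now()): number {
  return now - days * 24 * 60 * 60 * 1000;
}
```

With it, the crystal_search_messages call above becomes sinceMs: daysAgoMs(7).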

Choosing the right tool

“What do we know about X?” → crystal_recall
“Why did we decide Y?” → crystal_why_did_we
“What was said about Z in our last conversation?” → crystal_search_messages
“What are the most recent messages?” → crystal_recent
“What do I know about this topic overall?” → crystal_what_do_i_know
“Is there anything I should know before doing this?” → crystal_preflight