## Five memory stores
| Store | Purpose | Example |
|---|---|---|
| sensory | Raw observations and signals | “Andy sounds frustrated about the deploy” |
| episodic | Events and experiences | “We shipped v2 on March 15” |
| semantic | Facts and knowledge | “The API uses Convex for the backend” |
| procedural | Silent patterns, runbooks, and how-to memory | “Deploy with `npm run convex:deploy`” |
| prospective | Plans and future intentions | “Need to add billing webhooks next sprint” |
### Sensory store
Sensory memories capture raw observations — things noticed in the moment that might not be facts yet. Tone, emotional signals, and ambient context land here. These memories have shorter relevance windows and are weighted lower for factual recall.
### Episodic store
Episodic memories record events that happened. Deployments, decisions made, meetings held, milestones reached — anything that is a specific occurrence with a time and place. The temporal recall pipeline draws heavily on this store.
### Semantic store
Semantic memories hold stable facts and knowledge. Technical details, architecture choices, team structure, product behavior — things that are true and likely to remain true. This is the most frequently searched store for factual questions.
### Procedural store
Procedural memories store how to do things: commands, workflows, runbooks, and repeatable processes. When the Context Engine selects Workflow recall mode, it searches this store with elevated weight. Approved skills can be promoted on top of procedural memory, but procedurals remain the quiet execution layer by default.
### Prospective store
Prospective memories hold plans and future intentions. Things you want to do, tasks to revisit, features planned for later. The Context Engine surfaces these when the conversation moves toward planning or next steps.
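The five stores can be sketched as a TypeScript union, with per-store weights for factual recall. This is an illustrative shape only: the field names and the specific weight values are assumptions that mirror the prose above (sensory weighted lowest for facts, semantic highest), not the engine's actual schema or numbers.

```typescript
type MemoryStore =
  | "sensory"
  | "episodic"
  | "semantic"
  | "procedural"
  | "prospective";

// Illustrative record shape — field names are assumptions.
interface MemoryRecord {
  store: MemoryStore;
  content: string;
  createdAt: number; // epoch millis; episodic recall leans on this
}

// Hypothetical factual-recall weights reflecting the descriptions above:
// sensory is weighted lowest for factual questions, semantic highest.
const factualRecallWeight: Record<MemoryStore, number> = {
  sensory: 0.3,
  episodic: 0.7,
  semantic: 1.0,
  procedural: 0.8,
  prospective: 0.5,
};

const example: MemoryRecord = {
  store: "semantic",
  content: "The API uses Convex for the backend",
  createdAt: Date.now(),
};
```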
## Nine memory categories
Within stores, memories are tagged with a category that describes the specific type of content.

| Category | What it represents |
|---|---|
| decision | A choice that was made and why |
| lesson | Something learned from experience |
| person | A person — who they are, their role, ownership areas |
| rule | A constraint or policy to follow |
| event | Something that happened |
| fact | A stable piece of knowledge |
| goal | An objective or desired outcome |
| workflow | A repeatable process |
| conversation | Conversational context and continuity |
When you ask `crystal_why_did_we`, the engine prioritizes decision and lesson memories. When you ask `crystal_who_owns`, it focuses on person memories. You don’t need to specify the category; the right tool or recall mode brings the right category to the front automatically.
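The tool-to-category routing described above can be sketched as a simple lookup. The tool names come from this document; the mapping shape and the idea of a boost list are assumptions for illustration, not the engine's internals.

```typescript
// Hypothetical sketch: which categories each recall tool boosts first.
const categoryBoost: Record<string, string[]> = {
  crystal_why_did_we: ["decision", "lesson"],
  crystal_who_owns: ["person"],
};

// A caller would consult the boost list before ranking search results.
function boostedCategories(tool: string): string[] {
  return categoryBoost[tool] ?? [];
}
```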
## Automatic categorization
When an extraction pass runs after a conversation turn, the LLM assigns both a store and a category to each extracted memory. You don’t tag memories yourself; categorization is inferred from the content. For example:

- “We decided to use Convex because of its real-time subscriptions” → `semantic` store, `decision` category
- “Deploy with `npm run convex:deploy` from the repo root” → `procedural` store, `workflow` category
- “Sarah owns the billing module” → `semantic` store, `person` category
- “Never push directly to `main`” → `procedural` store, `rule` category
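In the real system an LLM performs this classification; as a mental model, a toy keyword heuristic can reproduce the four examples above. Everything here, the regexes and the fallback to `semantic`/`fact`, is an illustrative assumption, not the extraction pass itself.

```typescript
type Store = "sensory" | "episodic" | "semantic" | "procedural" | "prospective";
type Category =
  | "decision" | "lesson" | "person" | "rule" | "event"
  | "fact" | "goal" | "workflow" | "conversation";

// Toy stand-in for the LLM classification pass. Rule order matters:
// constraints first, then how-to phrasing, ownership, decisions, and a
// fact fallback.
function classify(text: string): { store: Store; category: Category } {
  if (/\b(never|always|must not)\b/i.test(text))
    return { store: "procedural", category: "rule" };
  if (/\b(deploy with|run the|step \d)\b/i.test(text))
    return { store: "procedural", category: "workflow" };
  if (/\b(owns|is responsible for)\b/i.test(text))
    return { store: "semantic", category: "person" };
  if (/\b(decided|chose)\b/i.test(text))
    return { store: "semantic", category: "decision" };
  return { store: "semantic", category: "fact" };
}
```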
You can update a memory’s store or category at any time using `crystal_edit`. This is useful if an automatic classification doesn’t match how you want the memory to be recalled.