Knowledge Layer
Most teams lose knowledge the same way: a decision made in Slack, a client quirk buried in an email thread, a retro insight that evaporates by Friday. The knowledge layer is where that content goes to stay — structured, searchable, and available to every assistant and workflow your team runs.
Caches: your team's library
Think of a cache as your team's library. Shelves for decisions, people, tasks, and meeting notes. A catalog that knows what's on every shelf. Cross-references from one entry to another so a decision can point back to the evidence behind it. Everything you capture has a place, and every place is indexed.
Most plans include one library — the whole team working against a single body of knowledge. Higher plans open additional caches so you can keep contexts cleanly separated: for example, a library per client engagement alongside an internal operating library. Chat, blueprints, and MCP always work against one cache at a time.
Chunks: the unit of memory
Everything inside a cache is a chunk. A decision, a meeting note, a person, a task, a link to a document — each is one chunk with a title, body, and a vector embedding that makes it findable by meaning rather than exact words.
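The shape of a chunk is easy to picture in code. A minimal sketch — the names here are illustrative, not the product's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """One unit of memory: a title, a body, and an embedding."""
    title: str
    body: str
    # Vector used for search by meaning; toy length here, real
    # embeddings have hundreds of dimensions.
    embedding: list[float] = field(default_factory=list)

decision = Chunk(
    title="Switch payment providers to Stripe",
    body="Lower fees on EU cards, native usage-based billing, team experience.",
    embedding=[0.12, -0.48, 0.33],
)
```

Every kind of content — decisions, people, tasks, notes — fits this one shape; only the body and metadata differ.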
Chunks carry their content in a flexible shape. A short convention helps both humans and assistants scan them quickly: structured facts at the top, then the body below a separator.
Status: Approved
Owner: Priya
Decided: 2026-03-14
---
We are switching payment providers to Stripe effective Q3.
Reasons: lower fees on EU cards, native usage-based billing,
and existing team experience. Migration plan in project P-142.
That's one chunk. Search it by meaning and it surfaces for "why did we leave our old payment provider" as readily as for "Stripe decision". Same for every row in a database, every meeting note, every person record — all chunks underneath.
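The facts-above-the-separator convention is simple enough to parse mechanically. A sketch, assuming the `---` separator shown above (the helper name is hypothetical):

```python
def parse_chunk(text: str) -> tuple[dict[str, str], str]:
    """Split a chunk into its structured facts and its narrative body."""
    facts_part, _, body = text.partition("\n---\n")
    facts = {}
    for line in facts_part.splitlines():
        key, _, value = line.partition(":")
        if value:  # only "Key: Value" lines become structured facts
            facts[key.strip()] = value.strip()
    return facts, body.strip()

facts, body = parse_chunk(
    "Status: Approved\nOwner: Priya\nDecided: 2026-03-14\n---\n"
    "We are switching payment providers to Stripe effective Q3."
)
# facts["Status"] == "Approved"; body is the narrative below the separator
```

The same text stays readable to a human scanning the dashboard and to an assistant that wants the status or owner without re-reading the whole body.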
Databases and the tree
Chunks sit in a tree. At the top of each cache are the places you expect to look: an Inbox for captured-but-unsorted items, plus databases like Tasks, People, Decisions, and Meeting Notes. Start from a template and the scaffolding is already there.
Databases are not folders of loose text. Each one carries typed columns — a status, a date, a relation to another row — so the content behaves like a table you can filter and sort, while each row is still a full chunk with narrative body text attached. One model, two views.
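The "one model, two views" idea can be sketched as rows that carry both typed columns and a free-text body. The field names below are invented for illustration:

```python
from datetime import date

# Each row is a full chunk: typed columns plus a narrative body.
decisions = [
    {"title": "Adopt Stripe", "status": "Approved", "decided": date(2026, 3, 14),
     "body": "Lower EU card fees; migration plan in project P-142."},
    {"title": "Refund window", "status": "Draft", "decided": date(2026, 4, 2),
     "body": "Extend refunds to 30 days for annual plans."},
]

# Table view: filter and sort on the typed columns...
approved = sorted(
    (row for row in decisions if row["status"] == "Approved"),
    key=lambda row: row["decided"],
)
# ...while each result still carries its full narrative body.
```

Filtering never strips the narrative away: the row you get back from a table query is the same chunk you would open and read.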
Search that follows the question
Search is semantic. You can ask for "what did we agree about refund policy" and get back the Decisions row from March even though it never uses the word "refund". The assistant does this automatically in chat: before it answers, it pulls the chunks most likely to matter, then reads them before responding.
Results stay scoped to the active cache. The assistant works from your team's content plus whatever integrations you've explicitly connected, not the open web — and when your plan includes multiple caches, each one stays cleanly separated from the others.
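The retrieve-then-answer step can be pictured as a similarity ranking over the active cache's embeddings. A toy illustration with tiny vectors — not the product's actual pipeline:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: how close two embeddings point."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy cache: each chunk has a title and a (tiny) embedding.
cache = [
    ("Refund policy decision", [0.9, 0.1, 0.0]),
    ("Office plant rota",      [0.0, 0.2, 0.9]),
]

query_embedding = [0.8, 0.2, 0.1]  # e.g. "what did we agree about refunds?"
top = max(cache, key=lambda chunk: cosine(query_embedding, chunk[1]))
# top[0] == "Refund policy decision", even with no word overlap
```

The ranking is over meaning, which is why the March decision surfaces for a question that never says "refund" in the same words.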
Links: the relationships between ideas
Parent-child hierarchy handles where a chunk lives. Links handle how it relates to other chunks. A decision can reference the evidence behind it. A task can block a project. A learning can cite the meeting where it came from.
Links are labeled — supports, contradicts, expands, references, and a few more — so the graph keeps meaning instead of collapsing into a flat web. The result is content that compounds: every new note has somewhere to connect, and old notes stay reachable through multiple paths.
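A labeled link graph is just edges that carry a relationship name. A minimal sketch — the labels match the ones above, but the identifiers and structure are illustrative:

```python
# (source, label, target) triples: links keep their meaning.
links = [
    ("decision:stripe",  "supports",   "evidence:fee-analysis"),
    ("decision:stripe",  "references", "meeting:2026-03-14"),
    ("learning:billing", "expands",    "decision:stripe"),
]

def linked(chunk_id: str, label: str) -> list[str]:
    """Follow only the edges with a given label from one chunk."""
    return [dst for src, lbl, dst in links if src == chunk_id and lbl == label]

# The decision points straight at the evidence behind it.
evidence = linked("decision:stripe", "supports")
# evidence == ["evidence:fee-analysis"]
```

Because the label travels with the edge, a traversal can ask a precise question — "what supports this decision?" — rather than "what is vaguely nearby?".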
How content gets in
The knowledge layer is deliberately open to input. Content can arrive:
- Directly in the dashboard, typed or pasted into a database row or note.
- From Chat, when you ask the assistant to capture something and it creates a chunk for you.
- From MCP clients like Claude Desktop or Cursor — the assistant reads and writes to the cache as if it were a native tool.
- Through Blueprints that process inputs on a schedule or on demand.
- Through Connections that sync from Gmail, Google Calendar, or other supported services.
Every path ends at the same place: a chunk, in a cache, ready for search, links, and workflows to find it.
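Whatever the source, ingestion converges on one shape. A sketch of that normalization step — the source names are the ones listed above, but the function and fields are hypothetical:

```python
def to_chunk(source: str, payload: dict) -> dict:
    """Normalize any input path into the one shape search and workflows expect."""
    return {
        "cache": payload.get("cache", "default"),
        "title": payload["title"],
        "body": payload.get("body", ""),
        "source": source,  # dashboard, chat, mcp, blueprint, connection
    }

chunk = to_chunk(
    "connection",
    {"title": "Kickoff call", "body": "Notes synced from Calendar."},
)
# chunk["source"] == "connection"; chunk["cache"] == "default"
```

Once normalized, a synced calendar event is indistinguishable from a note typed by hand: both are chunks that search, links, and workflows treat identically.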
Your content, yours to take
The knowledge layer is designed to be inspectable. Every chunk is visible in the dashboard with its full history. Export and deletion are first-class so you can move content, relocate it to another cache, or remove it entirely according to your internal policies. The memory is yours — not locked behind a black box you can't reach into.