Have you ever had to repeat yourself to an AI assistant? Tell it your name again, explain your project setup for the third time, remind it that you prefer TypeScript over JavaScript and dark roast over medium? Most AI assistants forget everything the moment you close the conversation.
Meggy doesn't. Its unified memory engine remembers your conversations, preferences, and documents — building a persistent, searchable knowledge base about you and your world. The more you use it, the less you have to explain.
Facts, episodic conversation summaries, and vault document chunks all live in a single SQLite store, searchable through a hybrid retrieval pipeline that combines vector cosine similarity with FTS5 full-text search.
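A minimal sketch of what such a unified store could look like, using Python's built-in sqlite3 module. The table name, columns, and FTS5 index below are illustrative assumptions, not Meggy's actual schema:

```python
import sqlite3

# Illustrative sketch only: 'memories' and its columns are assumed names.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE memories (
    id        INTEGER PRIMARY KEY,
    kind      TEXT CHECK (kind IN ('fact', 'episode', 'chunk')),
    content   TEXT NOT NULL,
    embedding BLOB  -- serialized vector used for cosine-similarity search
);
-- Keyword index over the same rows. A production setup would likely use
-- an external-content FTS5 table kept in sync with 'memories'.
CREATE VIRTUAL TABLE memories_fts USING fts5(content);
""")
db.execute(
    "INSERT INTO memories (kind, content) "
    "VALUES ('fact', 'User prefers TypeScript over JavaScript')"
)
db.execute("INSERT INTO memories_fts (rowid, content) SELECT id, content FROM memories")
rows = db.execute(
    "SELECT content FROM memories_fts WHERE memories_fts MATCH 'typescript'"
).fetchall()
print(rows)  # FTS5's default tokenizer is case-insensitive for ASCII
```

Because facts, episodes, and chunks share one table, a single query can span all three kinds of memory.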
The memory engine manages three distinct categories of information:
Discrete, declarative pieces of information extracted from conversations: your name, your project setup, your preference for dark roast over medium.
Facts are stored with embeddings so they can be retrieved semantically when relevant to a new conversation. Over time, Meggy builds a rich understanding of your preferences, routines, and family.
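To make embedding-based recall concrete, here is a toy sketch. The three-dimensional vectors and the cosine helper are illustrative; the actual embedding model and dimensionality are not specified here:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy 3-dimensional "embeddings"; a real system would use vectors from
# the embedding model, typically hundreds of dimensions long.
facts = {
    "prefers dark roast over medium": [0.9, 0.1, 0.0],
    "project is written in TypeScript": [0.1, 0.9, 0.2],
}
query = [0.85, 0.15, 0.05]  # e.g. the embedding of "what coffee do they like?"
best = max(facts, key=lambda fact: cosine(facts[fact], query))
print(best)  # the semantically closest fact wins, with no keyword overlap needed
```

The point of the example: "coffee" never appears in the stored fact, yet the fact is still retrieved, because similarity is computed in embedding space rather than over keywords.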
After each conversation, the memory engine generates a compressed summary that captures the key decisions, outcomes, and context. These summaries provide continuity across sessions without replaying entire conversation histories. Last week's research session informs this week's follow-up.
Chunks from ingested vault documents (see the Vault article) are also accessible through the memory engine. This means a search for "database schema" can return both conversation facts and technical documentation in a single result set.
Not everyone wants the same level of memory. Meggy includes memory presets that control how aggressively facts are stored and recalled.
When the AI needs context, the memory engine runs a two-stage retrieval:
Stage 1 — Parallel search: the query runs simultaneously against the vector index (cosine similarity over embeddings) and the FTS5 full-text index.
Stage 2 — Fusion and ranking: Both result sets are merged using Reciprocal Rank Fusion (RRF), producing a single ranked list that captures both semantic relevance and keyword precision.
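RRF itself is simple enough to show in a few lines. This sketch uses the conventional formula, where each document scores the sum of 1 / (k + rank) over the lists it appears in, with the common default k = 60; the document identifiers are made up:

```python
def rrf(rankings, k=60):
    # Reciprocal Rank Fusion: a document's score is the sum of
    # 1 / (k + rank) across every ranked list it appears in.
    scores = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["fact:coffee", "chunk:schema.md", "fact:timezone"]    # semantic
fts_hits = ["chunk:schema.md", "episode:2024-05-10", "fact:coffee"]  # keyword
fused = rrf([vector_hits, fts_hits])
print(fused)  # chunk:schema.md first: it ranks highly in both lists
```

Because RRF only looks at ranks, it needs no score normalization between the two searches, which is why it works well for fusing cosine similarities with FTS5's BM25-style scores.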
The AI can actively manage memory during conversations:
- remember_fact: Store a new fact extracted from the conversation
- recall_facts: Search memory for relevant context given a query
- forget_fact: Remove an outdated or incorrect fact
- list_facts: Browse stored facts with optional filtering

Before each AI response, the memory engine automatically injects relevant context.
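As a rough sketch, the four tools might dispatch to a store like the following. MemoryStore and its in-memory implementation are hypothetical stand-ins for Meggy's actual engine; real recall would run the hybrid vector + FTS5 pipeline rather than substring matching:

```python
# Hypothetical stand-in for Meggy's memory engine, for illustration only.
class MemoryStore:
    def __init__(self):
        self._facts = {}
        self._next_id = 1

    def remember_fact(self, text):
        # Store a new fact and return its id.
        fact_id = self._next_id
        self._facts[fact_id] = text
        self._next_id += 1
        return fact_id

    def recall_facts(self, query):
        # Toy recall: case-insensitive substring match stands in for
        # the real hybrid vector + FTS5 retrieval.
        return [t for t in self._facts.values() if query.lower() in t.lower()]

    def forget_fact(self, fact_id):
        # Remove an outdated or incorrect fact.
        return self._facts.pop(fact_id, None)

    def list_facts(self):
        # Browse everything currently stored.
        return list(self._facts.values())

store = MemoryStore()
fact_id = store.remember_fact("Prefers dark roast over medium")
print(store.recall_facts("dark roast"))  # finds the stored fact
store.forget_fact(fact_id)
print(store.list_facts())  # empty again
```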
This happens transparently: Meggy always has access to its accumulated knowledge without you needing to manually provide context. It just knows.
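For illustration, the injected context might be assembled into a prompt block along these lines. The exact format Meggy uses is not documented here, so the headers and ordering below are assumptions:

```python
def build_context_block(facts, summaries):
    # Hypothetical prompt layout; the section headers are assumed, not
    # Meggy's documented format.
    lines = ["Known facts about the user:"]
    lines += [f"- {fact}" for fact in facts]
    lines.append("Recent session summaries:")
    lines += [f"- {summary}" for summary in summaries]
    return "\n".join(lines)

block = build_context_block(
    ["Prefers TypeScript over JavaScript"],
    ["Researched SQLite FTS5 ranking options"],
)
print(block)
```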