Have you ever had to repeat yourself to an AI assistant? Tell it your name again, explain your project setup for the third time, remind it that you prefer TypeScript over JavaScript and dark roast over medium? Most AI assistants forget everything the moment you close the conversation.
Meggy doesn't. Its identity-scoped memory engine remembers your conversations, preferences, and documents — building a persistent, searchable knowledge base about you and your world. The more you use it, the less you have to explain.
Every person in your household gets their own identity-scoped memory space. Facts, preferences, relationships, episodic conversation summaries, procedural instructions, and vault document chunks all live in a single memories table, organized into 7 tiers and searchable through a hybrid retrieval pipeline that combines vector cosine similarity with FTS5 full-text search.
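The single-table design described above can be sketched with SQLite. This is a hypothetical schema for illustration only — the column names, tier labels, and FTS5 setup are assumptions, not Meggy's actual implementation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE memories (
    id        INTEGER PRIMARY KEY,
    identity  TEXT NOT NULL,   -- identity-scoped memory space (per person or agent)
    tier      TEXT NOT NULL,   -- fact | preference | relationship | episode |
                               -- procedure | ephemeral | vault_chunk
    content   TEXT NOT NULL,
    embedding BLOB,            -- vector used for cosine-similarity search
    channel   TEXT             -- source-channel provenance (desktop, telegram, ...)
);
-- FTS5 virtual table mirroring content for keyword search
CREATE VIRTUAL TABLE memories_fts USING fts5(content);
""")

conn.execute(
    "INSERT INTO memories (identity, tier, content, channel) VALUES (?, ?, ?, ?)",
    ("alice", "fact", "Alice prefers TypeScript over JavaScript", "desktop"),
)
conn.execute("INSERT INTO memories_fts (rowid, content) SELECT id, content FROM memories")
rows = conn.execute(
    "SELECT content FROM memories_fts WHERE memories_fts MATCH 'typescript'"
).fetchall()
```

Keeping every tier in one table means a single query (and a single FTS index) can span facts, episodes, and vault chunks alike.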
The memory engine organizes information into 7 distinct tiers:
Discrete, declarative pieces of information extracted from conversations, for example that you prefer TypeScript over JavaScript, or dark roast over medium.
Facts are stored with embeddings so they can be retrieved semantically when relevant to a new conversation. Over time, Meggy builds a rich understanding of your preferences, routines, and family.
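Semantic retrieval boils down to comparing the query's embedding against each stored fact's embedding. A minimal sketch, using toy three-dimensional vectors in place of a real embedding model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy embeddings; a real engine would get these from an embedding model.
facts = {
    "Prefers dark roast coffee": [0.9, 0.1, 0.0],
    "Daughter's name is Sophie": [0.0, 0.2, 0.9],
}
query_vec = [0.8, 0.2, 0.1]  # e.g. the embedding of "what coffee do they like?"
best = max(facts, key=lambda f: cosine(facts[f], query_vec))
```

The highest-scoring fact is the most semantically relevant one, even when the query shares no keywords with it.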
Each fact also carries source-channel provenance — a record of which platform it was learned on (desktop, Telegram, WhatsApp, etc.). This means you can see where a specific memory came from, and facts confirmed across multiple channels carry independent verification.
User preferences like communication style, language, units, or tool settings. Preferences use a sub-key system so only one value per category is kept — setting a new display theme automatically replaces the old one.
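The sub-key replacement behavior can be sketched as a simple keyed upsert (the function and category names here are illustrative, not Meggy's API):

```python
preferences: dict[str, str] = {}

def set_preference(category: str, value: str) -> None:
    # One value per category: writing a new value replaces the old one.
    preferences[category] = value

set_preference("display_theme", "light")
set_preference("display_theme", "dark")   # replaces "light"
set_preference("units", "metric")
```

Because the category acts as the key, stale preferences can never accumulate alongside current ones.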
Structured relationship data between people in your household. "Sophie is your daughter" or "Mike is your colleague at Acme Corp." These power contextual understanding across conversations.
After each conversation, the memory engine generates a compressed summary that captures the key decisions, outcomes, and context. These summaries provide continuity across sessions without replaying entire conversation histories. Last week's research session informs this week's follow-up.
Rules and instructions the AI should follow. "Always use metric units" or "Draft emails in a formal tone." These shape behavior across all conversations.
Short-term notes that are automatically cleaned up. Useful for temporary context like "Remind me about the 3 PM call" that doesn't need permanent storage.
Chunks from ingested vault documents (see the Vault article) are also accessible through the memory engine. This means a search for "database schema" can return both conversation facts and technical documentation in a single result set.
Not everyone wants the same level of memory. Meggy includes memory presets that control how aggressively facts are stored and recalled.
When the AI needs context, the memory engine runs a two-stage retrieval:
Stage 1 — Parallel search: the query is embedded and matched against stored memory vectors by cosine similarity, while the same query text simultaneously runs through the FTS5 full-text index for keyword matches.
Stage 2 — Fusion and ranking: Both result sets are merged using Reciprocal Rank Fusion (RRF), producing a single ranked list that captures both semantic relevance and keyword precision.
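Reciprocal Rank Fusion scores each result by summing 1 / (k + rank) across the lists it appears in, so items ranked well by both searches rise to the top. A compact sketch (the result labels and k=60 default are illustrative):

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["fact:coffee", "episode:trip", "fact:sophie"]   # semantic ranking
fts_hits    = ["fact:sophie", "fact:coffee", "vault:schema.md"]  # keyword ranking
fused = rrf([vector_hits, fts_hits])
```

`fact:coffee` appears near the top of both lists, so it wins the fused ranking, while results found by only one search still survive further down.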
The AI can actively manage memory during conversations:
- memory_upsert — Save or update a fact, preference, or observation with automatic deduplication
- memory_list — Browse stored memories with optional filtering by tier and category
- conversation_recall — Search past conversations and episode memories using hybrid FTS5 + vector retrieval

Large context windows are great, but accumulating stale tool results over a long session can bury important facts and slow down inference.
Meggy uses Micro-Compaction to manage the active context window. As the conversation stretches on, old or overly large tool results (like raw API payloads or lengthy scraped web pages) are dynamically replaced with compact summaries. This reclaims token budget and keeps the model focused on the actual discourse without flushing the cached system instructions.
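The core idea can be sketched as a pass over the message history that swaps oversized tool results for summaries while leaving everything else (notably the cached system prompt) untouched. The threshold and the `summarize` stand-in are assumptions for illustration; the real engine would use an LLM-generated summary:

```python
MAX_TOOL_RESULT_CHARS = 500  # assumed threshold, not Meggy's actual value

def summarize(text: str) -> str:
    # Stand-in for an LLM-generated summary of a raw tool payload.
    return text[:120] + " ... [compacted]"

def micro_compact(messages: list[dict]) -> list[dict]:
    """Replace oversized tool results with compact summaries.

    System instructions pass through unchanged, so prompt caching stays valid.
    """
    compacted = []
    for msg in messages:
        if msg["role"] == "tool" and len(msg["content"]) > MAX_TOOL_RESULT_CHARS:
            compacted.append({"role": "tool", "content": summarize(msg["content"])})
        else:
            compacted.append(msg)
    return compacted

history = [
    {"role": "system", "content": "You are Meggy."},
    {"role": "tool", "content": "x" * 2000},  # e.g. a raw API payload
]
slim = micro_compact(history)
```

After compaction, the token budget spent on the stale payload is reclaimed while the conversational thread stays intact.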
Before each AI response, the memory engine automatically retrieves and injects relevant context from the store: facts, preferences, relationships, and episode summaries that score highly against the current conversation. This happens transparently — Meggy always has access to its accumulated knowledge without you needing to manually provide context. It just knows.
Meggy's autonomous agents — such as Telegram community managers or research assistants — share the same unified memory store. Each agent gets its own identity-scoped memory space, just as each household member does, so what one agent learns stays isolated from the others while running on the same infrastructure.
For more on how agents integrate with the persona system, see Unified Personas.