How Meggy Remembers Things About You

Most AI assistants get memory wrong in the same way: you tell them something, they tuck it away, and a few weeks later they happily contradict themselves. "Your favorite color is blue!" they declare — except you told them three days ago it's actually cerulean blue, and now they've kept both claims, picked the louder one, and answered with a stale fact.

Meggy is built differently. As of May 2026, every persona — your bot, you, your contacts — has memory split into two clearly separated layers, each tuned for what it does best.

Two Layers, Each With a Job

Layer 1 — Your typed profile

Anything that has one value at a time — your timezone, your favorite color, your employer, your birthday — lives in a small typed profile attached to your identity. Meggy ships it into the assistant's working memory at the start of every conversation as the authoritative answer:

# USER FACTS
- Name: Nasso
- Timezone: Europe/Sofia
- Favorite color: cerulean blue
- Employer: Acme

When you say "remember my favorite color is cerulean blue", Meggy doesn't tack a new bullet onto the bottom of a long memory list. It calls a single dedicated tool, attribute_set, which closes out the previous value with a timestamp marking when it stopped being true, and writes the new one. The old row stays around for history, marked superseded; the new row becomes the one the assistant sees.

So when you ask the next day "what's my favorite color?" the answer is in the prompt itself, framed as authoritative. The assistant doesn't have to search; it doesn't have to weigh several stale candidates and pick the loudest. It just reads.
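A minimal sketch of what attribute_set-style supersession can look like, using sqlite3. The table layout and function below are illustrative assumptions, not Meggy's actual schema:

    import sqlite3
    from datetime import datetime, timezone

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE attributes (
            id            INTEGER PRIMARY KEY,
            key           TEXT NOT NULL,
            value         TEXT NOT NULL,
            valid_from    TEXT NOT NULL,
            valid_to      TEXT,     -- NULL means "currently true"
            superseded_by INTEGER   -- points at the replacement row
        )
    """)

    def attribute_set(key: str, value: str) -> None:
        """Write the new value, then close out whatever was current before."""
        now = datetime.now(timezone.utc).isoformat()
        cur = conn.execute(
            "INSERT INTO attributes (key, value, valid_from) VALUES (?, ?, ?)",
            (key, value, now),
        )
        # The old row survives for history, stamped and pointed at its successor.
        conn.execute(
            "UPDATE attributes SET valid_to = ?, superseded_by = ?"
            " WHERE key = ? AND valid_to IS NULL AND id != ?",
            (now, cur.lastrowid, key, cur.lastrowid),
        )

    attribute_set("favorite_color", "blue")
    attribute_set("favorite_color", "cerulean blue")

    # Only the current row surfaces in the prompt:
    row = conn.execute(
        "SELECT value FROM attributes WHERE key = ? AND valid_to IS NULL",
        ("favorite_color",),
    ).fetchone()
    print(row[0])  # cerulean blue

The key property: writes never destroy. Every change appends a row and closes out the old one.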

Layer 2 — Narrative memory

Some things don't fit into a key/value box. "She's been a vegetarian since college", "Recurring conflict with manager Sam over Q3 planning", "Likes their morning coffee strong but only in winter" — these are stories, not attributes. They live in Meggy's narrative store, which is searched on demand using a hybrid keyword + semantic-similarity engine.
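Hybrid search of this kind is typically a score fusion over two rankers, one lexical and one semantic. A toy sketch, where embed stands in for whatever embedding model the store uses and the blend weight alpha is an arbitrary choice, not Meggy's actual engine:

    import math

    def keyword_score(query: str, memory: str) -> float:
        """Crude lexical overlap: the fraction of query terms the memory contains."""
        q, m = set(query.lower().split()), set(memory.lower().split())
        return len(q & m) / max(len(q), 1)

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def hybrid_search(query, memories, embed, alpha=0.5, k=3):
        """Blend keyword and semantic scores, return the top-k memories."""
        qv = embed(query)
        scored = sorted(
            ((alpha * keyword_score(query, m)
              + (1 - alpha) * cosine(qv, embed(m)), m) for m in memories),
            reverse=True,
        )
        return [m for _, m in scored[:k]]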

When you ask a question that isn't covered by the typed profile, the assistant fetches a few relevant memories and the system prompt explicitly labels them provisional: hints to verify against the conversation, not gospel. If the assistant finds something stale, it can update the memory in place (a tool called memory_supersede) — the old version stays in the audit trail, but only the new version surfaces in future recalls.
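The supersede step itself can mirror the attribute flow. A sketch, assuming a memories table shaped like the hypothetical attributes table in the earlier example (id, text, valid_from, valid_to, superseded_by):

    from datetime import datetime, timezone

    def memory_supersede(conn, old_id: int, new_text: str) -> int:
        """Replace a narrative memory in place; the old version stays for audit."""
        now = datetime.now(timezone.utc).isoformat()
        cur = conn.execute(
            "INSERT INTO memories (text, valid_from) VALUES (?, ?)",
            (new_text, now),
        )
        # Close out the stale memory and point it at its successor; future
        # recalls filter on valid_to IS NULL and never see it again.
        conn.execute(
            "UPDATE memories SET valid_to = ?, superseded_by = ? WHERE id = ?",
            (now, cur.lastrowid, old_id),
        )
        return cur.lastrowid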

Why the Split Matters

Combining both kinds of facts in one bag — the way ChatGPT's memory and most off-the-shelf solutions do — is the source of the stale-memory bug. If the assistant's working memory contains four bullets that all mention your favorite color, three saying "blue" and one saying "cerulean blue", the model picks the majority and gets it wrong. Worse, well-meaning models will sometimes write changelog entries themselves: "User updated their favorite color from tan to blue, replacing previous preference for cream." That's not a fact; that's a story about a fact, and storing it pollutes future recall.

Meggy's writer rejects those changelog narratives at the source. The supersession history is recorded by the database itself — every time you change a fact, a row is closed out with a timestamp pointing at its replacement. You never have to write it down; the system already knows.
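One way to reject changelog narratives at write time is a heuristic filter in front of the memory writer. The patterns below are an illustrative guess at what such a guard might check, not Meggy's actual rule set:

    import re

    # Phrases that signal "a story about a fact" rather than a fact.
    CHANGELOG_PATTERNS = [
        r"\bupdated\b.*\bfrom\b.*\bto\b",
        r"\breplac(?:ed|ing)\b.*\bprevious\b",
        r"\bchanged\b.*\bfrom\b",
        r"\bno longer\b",
    ]

    def is_changelog_narrative(text: str) -> bool:
        """Return True if the candidate memory narrates a supersession."""
        lowered = text.lower()
        return any(re.search(p, lowered) for p in CHANGELOG_PATTERNS)

    is_changelog_narrative("User updated their favorite color from tan to blue")   # True
    is_changelog_narrative("Recurring conflict with manager Sam over Q3 planning") # False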

Trust Framing — Borrowed From the Best

Anthropic's memory tool for Claude takes a clear position about how injected memory should be treated, and Meggy follows the same pattern. The system prompt that Meggy hands to its language model contains an explicit rules block:

The USER FACTS block above is authoritative for this turn. Do not ask the user to confirm a listed fact. Do not search memory for a fact already listed. If the user contradicts a fact, call attribute_set with the new value. Never write changelog narratives — the system records supersession history automatically.

This single paragraph closes a class of failure modes that show up across most agentic memory systems: models second-guessing facts that are already present in their context, asking redundant clarification questions, and inventing changelog-style memories that compete with the structured truth.
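Assembling such a prompt is mechanical once the two layers are separated. A sketch with hypothetical section labels and an abbreviated rules string, not Meggy's actual prompt builder:

    RULES = (
        "The USER FACTS block above is authoritative for this turn. "
        "If the user contradicts a fact, call attribute_set with the new value."
    )  # abbreviated; the full rules block is quoted above

    def build_system_prompt(facts: dict[str, str], memories: list[str]) -> str:
        """Authoritative facts first, then the rules, then provisional recalls."""
        lines = ["# USER FACTS"]
        lines += [f"- {key}: {value}" for key, value in facts.items()]
        lines += ["", RULES]
        if memories:
            lines += ["", "# RECALLED MEMORIES (provisional; verify against the conversation)"]
            lines += [f"- {m}" for m in memories]
        return "\n".join(lines)

    print(build_system_prompt(
        {"Name": "Nasso", "Favorite color": "cerulean blue"},
        ["Recurring conflict with manager Sam over Q3 planning"],
    ))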

Bi-Temporal Memory — Audit, Not Amnesia

Both layers — typed profile and narrative store — record their changes the same way. Every row carries a valid_from timestamp (when it became current) and an optional valid_to timestamp (when it was superseded). A row that's currently true has valid_to = NULL; a row that's been replaced has both stamps and a pointer to its successor.

This is the same approach Graphiti (and traditional database temporal modeling) uses. It means:

- Nothing is ever destructively overwritten: every change appends a new row and closes out the old one, so the full history survives for audit.
- Current recall stays clean: "what's true now" is just the rows where valid_to is NULL, with no superseded candidates competing for attention.
- Point-in-time questions ("what did Meggy believe on a given date?") reduce to a range query over the two timestamps, as sketched below.
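With both stamps in place, "what did Meggy believe at time T" is a single range query. A sketch against the hypothetical attributes table from the first example, assuming timestamps are ISO-8601 strings in one timezone so they compare correctly as text:

    def value_as_of(conn, key: str, as_of_iso: str):
        """Return the value of `key` that was current at the given instant."""
        row = conn.execute(
            "SELECT value FROM attributes"
            " WHERE key = ? AND valid_from <= ?"
            " AND (valid_to IS NULL OR valid_to > ?)",
            (key, as_of_iso, as_of_iso),
        ).fetchone()
        return row[0] if row else None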

What This Means for You

In day-to-day use, you don't need to think about any of this. You tell Meggy what you like, what changed, who's who. Meggy keeps the typed profile clean, lets narrative memory accumulate the rich texture of how you actually live, and frames every recall to the language model so the answers stay grounded.

In the Identity view in Settings, you'll see your profile broken into a ## Profile section at the top — your typed facts, one per line. Below that are richer notes and narrative memories that have accumulated naturally. If you ever want to know how a fact got there, the audit history is one click away.

If you've used other AI assistants and felt the slow drift into stale memory bloat, this is the architecture that prevents it. It's local-first, it's fully transparent, and the assistant always knows which facts to trust and which to verify.

Related Reading