Getting Started with Meggy

Most AI assistants live in the cloud. Your conversations pass through someone else's servers, your documents get indexed by someone else's systems, and your data becomes someone else's training material.

Meggy takes a different approach. It's a desktop application that runs entirely on your machine. Your conversations, documents, preferences, and API keys never leave your computer. Think of it as having a capable, private assistant who lives in your laptop — one that can manage your files, search the web, control your smart home, and even remember your family's preferences over time.

Under the hood, Meggy is an Electron application that combines multi-agent orchestration, a multimodal RAG vault, 110+ built-in tools, and unified memory into a single desktop assistant. Your data is stored in a SQLite database encrypted at rest via SQLCipher, and API keys live in the OS keychain — nothing leaves your machine unless you opt into a cloud provider.
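To give a feel for the role-based model routing described here and in the onboarding section, here is a minimal sketch in TypeScript. All names, types, and the fallback behavior are hypothetical illustrations, not Meggy's actual API:

```typescript
// Hypothetical sketch of role-based model routing (not Meggy's real code).
type Role = "brain" | "fast" | "embedding";

interface RoleAssignments {
  [role: string]: { provider: string; model: string } | undefined;
}

// Resolve the model for a role, falling back to the primary "brain"
// model when a specialized role has no assignment.
function resolveModel(
  roles: RoleAssignments,
  role: Role
): { provider: string; model: string } {
  const assigned = roles[role] ?? roles["brain"];
  if (!assigned) throw new Error(`no model assigned for role "${role}"`);
  return assigned;
}

const roles: RoleAssignments = {
  brain: { provider: "anthropic", model: "claude-sonnet" },
  fast: { provider: "openai", model: "gpt-4o-mini" },
};

console.log(resolveModel(roles, "fast").model);      // directly assigned
console.log(resolveModel(roles, "embedding").model); // falls back to brain
```

A design like this lets one capable model cover every role out of the box, while users can swap in cheaper or local models for individual roles later.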

Installation

Download Meggy from the Downloads page and choose the build for your platform.

After installation, Meggy auto-updates whenever a new version is available — no manual downloads needed.

The Onboarding Wizard

When you first launch Meggy, the onboarding wizard walks you through initial setup in three simple steps:

  1. Select a Provider — Choose from OpenAI, Anthropic, Google Gemini, OpenRouter, or local runtimes like Ollama, LM Studio, and llama.cpp. You bring your own API key, and Meggy handles the rest.
  2. Enter Your API Key — Keys are validated in real time with regex pattern matching and stored in the OS keychain via Electron's safeStorage. Your credentials never touch the filesystem.
  3. Assign Model Roles — The wizard automatically assigns the best available model for each of Meggy's 8 roles: brain (primary reasoning), fast (utility tasks), embedding (RAG vectors), image generation, video generation, TTS, STT, and web search.
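The real-time key validation in step 2 can be sketched as a simple format check. The patterns and provider names below are illustrative assumptions, not the exact rules Meggy ships with:

```typescript
// Hypothetical sketch of key-format validation (patterns are illustrative).
const KEY_PATTERNS: Record<string, RegExp> = {
  openai: /^sk-[A-Za-z0-9_-]{20,}$/,
  anthropic: /^sk-ant-[A-Za-z0-9_-]{20,}$/,
};

function looksLikeValidKey(provider: string, key: string): boolean {
  const pattern = KEY_PATTERNS[provider];
  // Local runtimes (Ollama, LM Studio, llama.cpp) have no key format
  // to check, so unknown providers pass through.
  if (!pattern) return true;
  return pattern.test(key.trim());
}
```

A format check like this catches pasted-in typos instantly; the key is still verified against the provider's API before being stored.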

Your First Conversation

Once setup is complete, you'll see a clean chat interface. Here's what happens when you send your first message:

  1. You type a message — It's sent via IPC invoke from the renderer to the main process.
  2. Model resolution — GenerationService loads your settings, resolves the API key from the encrypted keychain, and selects the model via the model router.
  3. Streaming response — The response streams back in real time, with tool calls and results appearing as they happen.
  4. Tool execution — If the model invokes tools (like reading a file or searching the web), they execute with trust-based approval gating. Safe tools run automatically; dangerous ones ask for your permission first.
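The trust-based approval gating in step 4 boils down to a check before each tool call. This sketch uses made-up tool names and a two-tier trust model for illustration; Meggy's actual registry and tiers may differ:

```typescript
// Hypothetical sketch of trust-based tool gating (names are illustrative).
type Trust = "safe" | "dangerous";

interface Tool {
  name: string;
  trust: Trust;
  run: () => string;
}

async function executeTool(
  tool: Tool,
  askUser: (toolName: string) => Promise<boolean>
): Promise<string> {
  if (tool.trust === "dangerous") {
    // Dangerous tools require explicit user approval before running.
    const approved = await askUser(tool.name);
    if (!approved) return `skipped ${tool.name}: user declined`;
  }
  // Safe tools (and approved dangerous ones) run immediately.
  return tool.run();
}
```

In the app, `askUser` would surface an approval prompt in the chat UI; declining simply reports the tool as skipped rather than aborting the whole response.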

Try asking something practical like "What's on my calendar this week?" or "Summarize the latest headlines" — you'll see Meggy's tools in action.

What's Next?

Now that you're up and running, explore Meggy's core capabilities in the rest of the documentation.

Happy exploring! 🚀