Imagine you need to plan a family vacation. You'd want someone to research destinations, someone else to compare flight prices, another person to find hotels, and someone to build the itinerary. In real life, you'd do all of that yourself. With Meggy's multi-agent system, you create specialized agents that handle each part — and they work together automatically.
Multi-agent isn't about having one AI that does everything. It's about having a team of AI agents, each focused on what they do best, collaborating to get complex jobs done while you do something else.
The agent system is composed of several core components:
Every agent has a sandbox policy that determines its isolation level. The executor implements a strategy pattern with seven modes — the first four work out of the box with zero installation:
| Mode | Isolation | Use Case |
|---|---|---|
| Auto | Decided at runtime from risk score | Most agents — Meggy picks the right level for you |
| None | No isolation — runs in the main process | Your own trusted agents that need full API access |
| Process | Separate Node.js worker | Untrusted agents with basic containment |
| Restricted | Process + filesystem/network enforcement | Imported agents — blocks unauthorized paths and domains |
| Docker | Full container isolation | Maximum lockdown for experimental agents |
| Deno | Deno subprocess with permission flags | Fine-grained OS-level controls |
| WASM | WebAssembly sandbox | Complete isolation with no host access |
Beyond the isolation mode, each policy can include filesystem rules (allow/deny paths, read-only mode), network rules (allowed/denied domains with wildcards), and resource limits (memory cap, CPU time, wall-clock timeout). For the full breakdown, see Agent Sandbox & Security.
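The policy described above can be pictured as a single configuration object. The following is a minimal sketch; the type and field names (`SandboxPolicy`, `allowPaths`, `memoryMb`, etc.) are illustrative assumptions, not Meggy's actual schema:

```typescript
// Hypothetical shape of a sandbox policy combining isolation mode,
// filesystem rules, network rules, and resource limits.
type SandboxMode =
  | "auto" | "none" | "process" | "restricted" | "docker" | "deno" | "wasm";

interface SandboxPolicy {
  mode: SandboxMode;
  filesystem?: {
    allowPaths?: string[];     // paths the agent may touch
    denyPaths?: string[];      // explicitly blocked paths
    readOnly?: boolean;        // disallow all writes
  };
  network?: {
    allowedDomains?: string[]; // wildcards supported, e.g. "*.example.com"
    deniedDomains?: string[];
  };
  limits?: {
    memoryMb?: number;         // memory cap
    cpuMs?: number;            // CPU time budget
    timeoutMs?: number;        // wall-clock timeout
  };
}

// Example: a locked-down policy for an imported agent.
const importedAgentPolicy: SandboxPolicy = {
  mode: "restricted",
  filesystem: { allowPaths: ["/tmp/agent-workspace"], readOnly: false },
  network: { allowedDomains: ["api.example.com"] },
  limits: { memoryMb: 512, cpuMs: 30_000, timeoutMs: 120_000 },
};
```

A policy like this composes all three rule families in one place, so a single object fully describes an agent's containment.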
To complement sandbox isolation, Meggy ensures every agent execution session runs within a dedicated, ephemeral Workspace:
Agent outputs are written to an `artifacts/` subdirectory, keeping them contained. When a session completes successfully, valid artifacts are automatically ingested into your permanent Vault and linked to the execution trace, after which the temporary workspace is immediately garbage collected.
Agents can operate in two execution modes, depending on how structured the task is:
Pipeline mode is for tasks with a clear sequence of steps. You define a directed acyclic graph (DAG), and the Pipeline Executor resolves the topological order, running steps in parallel where dependencies allow. Each step can be a local agent action or a remote agent call via the A2A client. Think of it as a recipe — Meggy follows the steps in order.
Engine mode is for open-ended tasks. The agent receives a goal, plans its approach, and iterates with tool calls until the objective is met or the budget is exhausted. Think of it as giving your agent a problem and letting it figure out the solution.
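To make the pipeline idea concrete, here is a sketch of how an executor might resolve a DAG into parallel "waves." The step names reuse the vacation example from the introduction; the function and interface names are made up for illustration, not Meggy's API:

```typescript
// Resolve a DAG of steps into waves: every step in a wave has all its
// dependencies satisfied, so the whole wave can run in parallel.
interface Step { id: string; deps: string[] }

function executionWaves(steps: Step[]): string[][] {
  const remaining = new Map(steps.map(s => [s.id, new Set(s.deps)]));
  const waves: string[][] = [];
  while (remaining.size > 0) {
    // Steps with no unsatisfied dependencies are ready to run.
    const ready = [...remaining.entries()]
      .filter(([, deps]) => deps.size === 0)
      .map(([id]) => id);
    if (ready.length === 0) throw new Error("cycle detected: not a DAG");
    waves.push(ready);
    for (const id of ready) remaining.delete(id);
    for (const deps of remaining.values())
      for (const id of ready) deps.delete(id);
  }
  return waves;
}

// "flights" and "hotels" both depend only on "research", so they run together.
const waves = executionWaves([
  { id: "research", deps: [] },
  { id: "flights", deps: ["research"] },
  { id: "hotels", deps: ["research"] },
  { id: "itinerary", deps: ["flights", "hotels"] },
]);
// waves: [["research"], ["flights", "hotels"], ["itinerary"]]
```

This is the essence of "running steps in parallel where dependencies allow": each inner array is a batch the executor can dispatch concurrently.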
Every pipeline run records a step-by-step execution trace so you can see exactly what happened:
| Trace Field | What It Shows |
|---|---|
| Status | Whether the step succeeded, failed, or was skipped |
| Duration | How long the step took to execute |
| Token usage | Input and output token counts per step |
| Model | Which LLM was used for that step |
| Guard result | Whether conditional guards passed or blocked |
| Error strategy | What happened on failure — retry, fallback, skip, or halt |
| Retry count | How many retries were attempted before the step resolved |
Traces are linked to the parent execution run, so you can browse the run history and drill into any pipeline to see exactly which steps succeeded, which failed, and why. This makes debugging multi-step workflows significantly easier than reading log files.
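A trace record carrying the fields from the table above might look like the sketch below. The interface and sample values are assumptions for illustration, not Meggy's stored schema:

```typescript
// Illustrative per-step trace record mirroring the table above.
interface StepTrace {
  step: string;
  status: "succeeded" | "failed" | "skipped";
  durationMs: number;
  tokens: { input: number; output: number };
  model: string;
  guardPassed?: boolean;
  errorStrategy?: "retry" | "fallback" | "skip" | "halt";
  retryCount: number;
}

// Drill into a run: list the steps that failed.
function failures(trace: StepTrace[]): string[] {
  return trace.filter(t => t.status === "failed").map(t => t.step);
}

const run: StepTrace[] = [
  { step: "fetch", status: "succeeded", durationMs: 840,
    tokens: { input: 1200, output: 300 }, model: "example-model", retryCount: 0 },
  { step: "summarize", status: "failed", durationMs: 2100,
    tokens: { input: 3000, output: 0 }, model: "example-model",
    errorStrategy: "retry", retryCount: 3 },
];
// failures(run) → ["summarize"]
```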
Every time an agent finishes running, the result is wrapped into a structured artifact — a typed, multi-part container that captures exactly what the agent produced. You can think of an artifact as the agent's finished work product, whether that's a research report, a data table, an RSS feed summary, or a sensor reading.
Artifacts are stored locally in your Vault and indexed in a database, so you can:
Meggy auto-detects the content type (report, data, feed, media, sensor, summary) and applies retention policies to keep storage manageable — you set how many artifacts to keep, how old they can be, or how much disk space they're allowed to use.
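A retention policy of the kind described can be sketched as a pruning function. Names and thresholds are illustrative assumptions (the disk-space rule is omitted to keep the sketch short):

```typescript
// Apply a retention policy: drop artifacts older than maxAgeMs,
// then keep only the newest maxCount survivors.
interface Artifact { id: string; createdAt: number; sizeBytes: number }

function prune(
  artifacts: Artifact[],
  maxCount: number,
  maxAgeMs: number,
  now: number,
): Artifact[] {
  return artifacts
    .filter(a => now - a.createdAt <= maxAgeMs) // drop expired artifacts
    .sort((a, b) => b.createdAt - a.createdAt)  // newest first
    .slice(0, maxCount);                        // cap the count
}

const kept = prune(
  [
    { id: "a", createdAt: 900_000, sizeBytes: 10 },
    { id: "b", createdAt: 100_000, sizeBytes: 10 }, // too old, expires
    { id: "c", createdAt: 950_000, sizeBytes: 10 },
    { id: "d", createdAt: 800_000, sizeBytes: 10 }, // pushed out by the count cap
  ],
  2, 500_000, 1_000_000,
);
// kept ids: ["c", "a"]
```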
Complex tasks often require side-processing that shouldn't make the user wait. Meggy supports Forked Agents — lightweight background executions spawned from a parent agent.
Forked agents run in complete isolation with their own message state and execution limits. Because they share the parent's system prompt and model, they benefit from Prompt Cache sharing, running nearly instantly and at a fraction of the cost. Paired with Post-Sampling Hooks, forks enable fire-and-forget background tasks (like memory extraction or episodic summarization) that run entirely off the critical path, never blocking the user's response.
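The fire-and-forget pattern reduces to launching a promise without awaiting it. In this sketch, `extractMemories` is a hypothetical stand-in for a forked agent's background work:

```typescript
// Hypothetical background task a fork might run (memory extraction);
// here it just counts transcript lines so the sketch is self-contained.
async function extractMemories(transcript: string[]): Promise<number> {
  return transcript.length;
}

function respondToUser(transcript: string[]): string {
  // Spawn the fork without awaiting it — off the critical path.
  // Errors are logged rather than surfaced to the user.
  void extractMemories(transcript).catch(err =>
    console.error("fork failed", err),
  );
  return "answer delivered immediately";
}
```

The parent returns its response synchronously; the fork settles whenever it settles, which is why the user never waits on it.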
Before any agent executes, Meggy scores its risk level based on:
When the sandbox mode is set to Auto, Meggy uses the danger score to automatically escalate isolation — a low-risk agent runs with basic process isolation, while a high-risk agent gets bumped to Restricted or higher. Agents classified as dangerous require explicit human approval before each execution cycle. You're always in the loop for high-risk actions.
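The Auto escalation logic can be sketched as a threshold mapping. The score range and cutoffs below are invented for illustration; Meggy's actual scoring is not documented here:

```typescript
// Hypothetical mapping from a danger score in [0, 1] to a sandbox
// mode plus an approval requirement, as Auto mode might do it.
type EscalatedMode = "process" | "restricted" | "docker";

function escalate(dangerScore: number): {
  mode: EscalatedMode;
  needsApproval: boolean;
} {
  if (dangerScore < 0.3) return { mode: "process", needsApproval: false };
  if (dangerScore < 0.7) return { mode: "restricted", needsApproval: false };
  // Dangerous agents get maximum lockdown and a human in the loop.
  return { mode: "docker", needsApproval: true };
}
```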
Meggy implements the Agent-to-Agent (A2A) protocol using JSON-RPC 2.0 to establish a dynamic agent mesh.
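At the wire level, an A2A call is a JSON-RPC 2.0 envelope. The sketch below shows the envelope shape only; the method name and params are illustrative assumptions rather than a verbatim excerpt of the A2A specification:

```typescript
// Build a JSON-RPC 2.0 request of the kind an A2A client might send.
let nextId = 0;

function rpcRequest(method: string, params: object) {
  return { jsonrpc: "2.0" as const, id: ++nextId, method, params };
}

const req = rpcRequest("tasks/send", {
  task: { skill: "research", input: "compare flight prices" },
});
// req.jsonrpc === "2.0", req.id === 1
```

Because every message carries `jsonrpc`, an `id`, and a `method`, any JSON-RPC-capable peer can join the mesh without bespoke wire formats.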
Agents advertise skills (e.g. `research`, `code-review`), signaling which domains they excel at.

Every agent is defined by a blueprint — a shareable template that captures everything about how the agent works:
- `taskTemplate` — The system prompt template with variable placeholders
- `inputFields` — Dynamic input fields the user fills in before execution
- `preferences` — Model selection, temperature, and tool permissions
- `execution` — Strategy (stubbornness, parallelism, sandbox policy)
- `verifyCommands` — Shell commands that validate the agent's output

Blueprints can be exported as `.agent.md` files for sharing and version control. Found a useful agent configuration? Export it, share it with a friend, or save it for later.
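To see how `taskTemplate` and `inputFields` fit together, here is a sketch of rendering a template with user-supplied values. The `{{name}}` placeholder syntax is an assumption for illustration:

```typescript
// Substitute {{placeholder}} tokens in a task template with the values
// the user entered into the blueprint's input fields.
function renderTemplate(
  taskTemplate: string,
  inputs: Record<string, string>,
): string {
  return taskTemplate.replace(/\{\{(\w+)\}\}/g, (_, key) => inputs[key] ?? "");
}

const prompt = renderTemplate(
  "Research {{destination}} hotels under {{budget}} per night.",
  { destination: "Lisbon", budget: "$150" },
);
// → "Research Lisbon hotels under $150 per night."
```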
Agents also support portable .meggy-agent packages — a single file containing the complete agent definition plus avatar. Import them from the Agents Dashboard to instantly set up agents shared by teammates.
Some agents can spawn multiple sub-agent instances — lightweight copies that run alongside each other with different configurations. Think of it as creating one "Network Monitor" agent template, then launching separate instances for each server you want to watch.
Each instance has its own:
The Orbital Dashboard visualizes running agents and their instances as an interactive constellation. Parent agents appear as large nodes, with instances orbiting them as smaller satellites. You can zoom, pan, right-click to manage instances, and hover for status details — all from a single view.