2026-04-01

From prompt to plan to a single answer

Why the coordinator pattern wins in production, and how we apply it end to end.

See also: docs/architecture.md, docs/agents-api-and-execution.md

A serious agent stack must expose how it turns a request into a plan, then into work, then into an answer. If that chain is not inspectable, you are buying a label, not a system.

lmkgpt routes every mission through a single analyze step, then an execute step. The first step turns language into a typed plan. The second step runs that plan. If you cannot explain the split to a new engineer in one whiteboard pass, the architecture is too clever.
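The split above can be sketched in a few lines. This is a minimal illustration, not lmkgpt's actual API: the names Plan, Step, analyze, and execute are assumptions, and the bodies are stubs standing in for real model calls.

```python
from dataclasses import dataclass, field

# Hypothetical types: an illustration of the analyze/execute split,
# not lmkgpt's real interface.

@dataclass
class Step:
    agent: str   # which sub-agent runs this step
    brief: str   # the instruction handed to that agent

@dataclass
class Plan:
    mission: str
    steps: list[Step] = field(default_factory=list)

def analyze(mission: str) -> Plan:
    """Step one: turn language into a typed plan (stubbed heuristic here)."""
    return Plan(mission=mission, steps=[
        Step(agent="researcher", brief=f"Gather sources for: {mission}"),
        Step(agent="writer", brief=f"Draft an answer for: {mission}"),
    ])

def execute(plan: Plan) -> str:
    """Step two: run the plan; each step just echoes its brief in this sketch."""
    return "\n".join(f"[{s.agent}] {s.brief}" for s in plan.steps)

plan = analyze("summarize the incident report")
answer = execute(plan)
```

The point of the sketch is the boundary, not the stubs: analyze returns a value you can print, log, and review before any work runs.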

Anthropic's essay Building Effective AI Agents makes the case that simple, composable patterns beat opaque stacks. The piece is cited across the industry for a reason: it names orchestrator-workers coordination, transparent planning, and tight tool boundaries as the defaults for serious deployments.

What we built

We use a coordinator model to produce a plan and sub-agent briefs, then we run sub-agents against those briefs, then we synthesize. The user sees the plan, the per-agent work, and the final merged answer. Nothing in that list is optional for trust.
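The three artifacts the user sees can be made concrete with a small sketch. Again, this is an assumption-laden illustration, not the shipped coordinator: the briefs are hard-coded and the sub-agents are stand-in functions.

```python
from dataclasses import dataclass

# Hypothetical coordinator sketch. Everything here is illustrative:
# a real coordinator would produce briefs with a model, not a literal list.

@dataclass
class Brief:
    agent: str
    task: str

def coordinate(mission: str) -> dict:
    # 1. The coordinator produces the plan: a list of sub-agent briefs.
    briefs = [
        Brief("researcher", f"Collect evidence for: {mission}"),
        Brief("analyst", f"Weigh the evidence for: {mission}"),
    ]
    # 2. Each sub-agent runs against its brief (stubbed as an echo).
    work = {b.agent: f"{b.agent} completed: {b.task}" for b in briefs}
    # 3. Synthesis merges the per-agent work into one answer.
    answer = " | ".join(work.values())
    # The return value is the audit trail: plan, per-agent work, final answer.
    return {"plan": [b.task for b in briefs], "work": work, "answer": answer}

result = coordinate("assess the outage")
```

Note that the plan and the per-agent work are first-class outputs, not internal state. That is exactly the artifact the next section tells you to ask for.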

So what

If you are buying or building multi-agent software, ask for the plan artifact. If the vendor hides it, you are not looking at a system that can be audited. You are looking at a black box with a brand name.