Control Plane
The mutable periphery — Gateway, Brain, and Memory — that handles requests, reasoning, and state.
The Mutable Periphery
The control plane is the outermost layer of KRAIT's architecture. It contains the components that interact directly with users, LLM providers, and external systems. Everything here is designed to be replaceable and restartable without affecting the security guarantees of the inner planes.
Gateway
The Gateway is a Plug-based HTTP interface running inside a Phoenix endpoint. It accepts incoming requests, authenticates callers against configurable policies, and routes messages to the appropriate Brain process. Because it is built on standard Phoenix/Plug conventions, it inherits all the production-grade features you would expect: telemetry, rate limiting, and graceful shutdown.
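The following is a minimal sketch of what the request surface could look like. The module names (Krait.Gateway.Router, Krait.Gateway.AuthPlug) and the Krait.Brain.handle_message/2 call are illustrative assumptions, not the framework's actual API; they stand in for whatever the real endpoint wires into its pipeline.

```elixir
defmodule Krait.Gateway.Router do
  use Plug.Router

  plug :match
  plug Plug.Parsers, parsers: [:json], json_decoder: Jason
  plug Krait.Gateway.AuthPlug   # hypothetical plug: authenticates the caller and assigns agent_id
  plug :dispatch

  post "/v1/messages" do
    # Hand the validated message to the caller's Brain process and relay the reply.
    {:ok, reply} = Krait.Brain.handle_message(conn.assigns.agent_id, conn.body_params)
    send_resp(conn, 200, Jason.encode!(reply))
  end

  match _ do
    send_resp(conn, 404, "not found")
  end
end
```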
Each inbound request spawns a supervised task, ensuring that a slow or failed request never blocks others. The Gateway also exposes a WebSocket channel for streaming responses during long-running evolution cycles.
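The per-request isolation could be implemented with a Task.Supervisor along these lines. The supervisor name, handler module, and 30-second timeout are assumptions chosen for the sketch.

```elixir
defmodule Krait.Gateway.RequestHandler do
  import Plug.Conn

  # Runs each request in its own supervised task so a slow or crashing Brain
  # call cannot take down, or block, the process serving other requests.
  def handle(conn, params) do
    task =
      Task.Supervisor.async_nolink(Krait.Gateway.TaskSupervisor, fn ->
        Krait.Brain.handle_message(conn.assigns.agent_id, params)
      end)

    case Task.yield(task, 30_000) || Task.shutdown(task) do
      {:ok, {:ok, reply}} -> send_resp(conn, 200, Jason.encode!(reply))
      _ -> send_resp(conn, 504, "request timed out")
    end
  end
end
```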
Brain
The Brain implements a ReAct (Reasoning + Acting) loop powered by an LLM. On each iteration, it receives an observation, generates a thought, selects a tool or action, and processes the result. The loop continues until the Brain reaches a final answer or exceeds the configured step limit.
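A schematic of that loop is shown below. This is a sketch rather than the production Brain: the think and act functions are injected through the state map so the recursion stays independent of any particular LLM client or skill dispatcher.

```elixir
defmodule Krait.Brain.Loop do
  # Schematic ReAct loop. `state.think` turns the observation history into a
  # thought plus chosen action; `state.act` executes a tool call.
  def run(state, step \\ 0)

  def run(%{max_steps: max}, step) when step >= max, do: {:error, :step_limit_exceeded}

  def run(state, step) do
    # Ask the LLM for a thought and the action it implies, given the history so far.
    case state.think.(state.observations) do
      {:final_answer, answer} ->
        {:ok, answer}

      {:tool, skill, args} ->
        # Dispatch the tool call, then feed its result back in as a new observation.
        observation = state.act.(skill, args)
        run(%{state | observations: [observation | state.observations]}, step + 1)
    end
  end
end
```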
Tool calls are dispatched through a behaviour-based skill system. Each skill is a module that implements the Krait.Skill behaviour and declares its own input schema, output schema, and security classification. The Brain cannot invoke tools that have not been explicitly registered and validated against KRAIT's rules.
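A skill module might look like the following. The callback names (input_schema/0, output_schema/0, security_class/0, call/1) are illustrative assumptions; the real contract is whatever Krait.Skill defines.

```elixir
defmodule MyAgent.Skills.HttpFetch do
  @behaviour Krait.Skill

  @impl true
  def input_schema, do: %{url: :string}

  @impl true
  def output_schema, do: %{status: :integer, body: :string}

  @impl true
  def security_class, do: :network_read

  @impl true
  def call(%{url: url}) do
    # A real skill would go through an HTTP client here; this stub only
    # echoes the URL it was asked to fetch.
    {:ok, %{status: 200, body: "stubbed response for #{url}"}}
  end
end
```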
When the Brain determines that a new capability is needed, it does not modify itself directly. Instead, it emits a structured evolution proposal that enters the analysis plane.
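For illustration, such a proposal might carry a shape like the one below. The field names and the Krait.Evolution.submit/1 entry point are hypothetical; the actual schema belongs to the analysis plane.

```elixir
# Illustrative only: field names and the submit/1 call are assumptions.
proposal = %{
  capability: "parse_csv_attachments",
  rationale: "recent tasks needed tabular extraction no registered skill provides",
  requested_security_class: :file_read,
  emitted_at: DateTime.utc_now()
}

:ok = Krait.Evolution.submit(proposal)
```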
Memory
KRAIT's memory subsystem uses ETS tables as fast, node-local working memory during a conversation, and pluggable persistent backends for long-term storage. Supported backends include PostgreSQL via Ecto, SQLite for single-node deployments, and a vector store adapter for semantic retrieval.
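The working-memory tier can be pictured as a per-conversation ETS table owned by the conversation's process, along the lines of this sketch; the table and key names are illustrative, and the persistent backend would be selected separately through configuration.

```elixir
# Per-conversation working memory: an ETS table owned by the conversation process.
table = :ets.new(:krait_working_memory, [:set, :protected])

# Store and read back intermediate state for the current conversation.
:ets.insert(table, {:last_tool_result, %{skill: :http_fetch, status: 200}})
[{:last_tool_result, result}] = :ets.lookup(table, :last_tool_result)
```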
Memory is scoped per-conversation by default, but agents can be configured with shared memory namespaces for collaborative scenarios. All memory writes are journaled so that the evolution pipeline can reason about what the agent has learned over time.
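A configuration sketch of those knobs might look like this; the option names below are assumptions made for illustration, not the framework's documented keys.

```elixir
# config/runtime.exs — illustrative option names only.
import Config

config :krait, MyAgent,
  memory_scope: :conversation,            # default: memory isolated per conversation
  shared_namespaces: ["team_research"],   # opt in to a shared namespace for collaboration
  journal_writes: true                    # journal every write for the evolution pipeline
```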