Brain Canvas
The real-time visualization system that makes AI reasoning transparent. See every skill, tool, and channel as a live, interactive node graph.
The Brain Canvas is Chvor’s real-time visualization layer. Built on React Flow, it renders every component of your AI system — skills, tools, channels, and memory — as an interactive node graph. When the AI thinks, you watch it happen.
Most AI platforms are black boxes. You type a prompt, wait, and hope for a good answer. Brain Canvas takes the opposite approach: every decision the model makes is visible, traceable, and debuggable in real time.
Node types
Every entity in your Chvor instance is represented as a node on the canvas. Nodes are color-coded and grouped by function.
Skill nodes
Skill nodes represent behavioral definitions — the personalities and capabilities your AI can adopt. Each skill node displays:
- The skill name and active trigger pattern
- Which tools the skill has access to
- Whether the skill is currently executing (pulsing border animation)
┌─────────────────────────┐
│ ★ Planner               │
│ Trigger: /plan *        │
│ Tools: 3 connected      │
│ Status: ● idle          │
└─────────────────────────┘
Tool nodes
Tool nodes represent capabilities — web search, filesystem access, code execution, or any connected MCP server. When a tool is invoked during a conversation, its node lights up and the edge connecting it to the active skill animates.
Channel nodes
Channel nodes represent communication endpoints. Each channel your instance is connected to (Web Chat, Telegram, Discord, Slack) appears as a node. Incoming messages cause a brief pulse on the corresponding channel node, and you can trace the message path from channel through the orchestrator to the LLM and back.
Memory node
The memory node represents Chvor’s vector store. During retrieval, the node displays the number of memories recalled and their relevance scores. Edges animate from the memory node to the active skill, showing context being injected into the prompt.
Edge animations
Edges are not just static connections — they animate during execution to show data flow in real time.
| Animation | Meaning |
|---|---|
| Dashed pulse (blue) | Data flowing between nodes |
| Solid glow (green) | Successful tool execution |
| Solid glow (red) | Tool execution failed |
| Fading trail | Memory retrieval in progress |
Edge animations are driven by WebSocket events from the server. Every tool call, memory lookup, and LLM token stream is broadcast as an event, and the canvas renders each one as a visual transition.
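As a rough sketch of how such a mapping could work, the snippet below folds event types into the animation styles from the table above. The event names match the examples later in this article, but the function and the style identifiers are illustrative assumptions, not Chvor's actual API:

```typescript
// Illustrative mapping from server event types to edge animation styles.
// Style names ("dashed-pulse-blue", etc.) are invented for this sketch.
type EdgeStyle =
  | "dashed-pulse-blue"   // data flowing between nodes
  | "solid-glow-green"    // successful tool execution
  | "solid-glow-red"      // tool execution failed
  | "fading-trail";       // memory retrieval in progress

function edgeStyleFor(eventType: string, status?: string): EdgeStyle | null {
  switch (eventType) {
    case "tool:invoke":
      return "dashed-pulse-blue";
    case "tool:result":
      return status === "success" ? "solid-glow-green" : "solid-glow-red";
    case "memory:recall":
      return "fading-trail";
    default:
      return null; // events like llm:token need no edge animation
  }
}
```

The point of a single lookup like this is that every visual transition stays a pure function of the event stream, so the canvas can be replayed deterministically from logged events.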
Execution modes
Brain Canvas supports two fundamentally different wiring modes that control how skills and tools interact.
Constellation mode (default)
In Constellation mode, the AI decides which skills and tools to use. You define the available components, and the model dynamically selects the right combination based on the conversation context.
# No explicit wiring needed — the AI figures it out
mode: constellation
skills:
- planner
- coder
- researcher
tools:
- web_search
- filesystem
- code_execution
On the canvas, Constellation mode renders all nodes in a gravitational layout. Active connections appear dynamically as the model selects skills and tools. Inactive nodes drift gently to the periphery.
Constellation mode is ideal when:
- You want the AI to adapt to unpredictable queries
- Your skills cover different domains and rarely overlap
- You are prototyping and exploring what combinations work
Pipeline mode
In Pipeline mode, you manually wire the execution graph. You drag edges between nodes on the canvas to define exactly which tools each skill can access and in what order.
mode: pipeline
pipeline:
- skill: researcher
tools: [web_search]
next: planner
- skill: planner
tools: [filesystem]
next: coder
- skill: coder
tools: [code_execution]
Pipeline mode renders nodes in a left-to-right flow layout. Edges are fixed and visible at all times. This mode is useful when:
- You need deterministic, repeatable behavior
- Tasks follow a clear multi-step process (research, plan, execute)
- You want to restrict which tools each skill can access
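Because Pipeline wiring is explicit, a config can reference a skill that does not exist. A minimal sketch of a validity check, mirroring the field names in the YAML above (the validator itself is an illustration, not part of Chvor):

```typescript
// Each step mirrors one entry in the `pipeline:` list above.
interface PipelineStep {
  skill: string;
  tools: string[];
  next?: string; // omitted for the final step
}

// Return a list of wiring errors: every `next` must name a defined skill.
function validatePipeline(steps: PipelineStep[]): string[] {
  const skills = new Set(steps.map((s) => s.skill));
  const errors: string[] = [];
  for (const step of steps) {
    if (step.next !== undefined && !skills.has(step.next)) {
      errors.push(`step "${step.skill}" points to unknown skill "${step.next}"`);
    }
  }
  return errors;
}
```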
You can switch between modes at any time from the canvas toolbar. Pipeline wiring is preserved when you switch to Constellation, so you can toggle back without losing your graph.
WebSocket event system
The Brain Canvas stays in sync with the server through a persistent WebSocket connection. The server pushes structured events that the canvas interprets and renders.
Key event types:
// Skill activation
{ type: "skill:activate", skillId: "planner", timestamp: 1711234567890 }
// Tool invocation
{ type: "tool:invoke", toolId: "web_search", input: { query: "..." }, skillId: "planner" }
// Tool result
{ type: "tool:result", toolId: "web_search", status: "success", duration: 1243 }
// Memory retrieval
{ type: "memory:recall", count: 5, topScore: 0.94 }
// LLM token streaming
{ type: "llm:token", content: "Here is", skillId: "planner" }
// Execution complete
{ type: "turn:complete", skillId: "planner", toolCalls: 3, duration: 4521 }
The client consumes these events through a Zustand store, which maps them onto React Flow node and edge state updates. If you disconnect and reconnect, the canvas requests a snapshot of the current state from the server to resync.
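The event-to-state mapping can be sketched as a pure reducer. The event shapes below match the examples above; the state shape and the `applyEvent` function are assumptions for illustration, not Chvor's actual store:

```typescript
// Minimal canvas state: which skill is active, and a status per node.
interface CanvasState {
  activeSkill: string | null;
  nodeStatus: Record<string, "idle" | "running" | "error">;
}

// A subset of the event types listed above.
type CanvasEvent =
  | { type: "skill:activate"; skillId: string }
  | { type: "tool:invoke"; toolId: string; skillId: string }
  | { type: "tool:result"; toolId: string; status: "success" | "error" }
  | { type: "turn:complete"; skillId: string };

// Fold one event into the next state; a Zustand store would call this
// from its WebSocket message handler.
function applyEvent(state: CanvasState, ev: CanvasEvent): CanvasState {
  switch (ev.type) {
    case "skill:activate":
      return {
        activeSkill: ev.skillId,
        nodeStatus: { ...state.nodeStatus, [ev.skillId]: "running" },
      };
    case "tool:invoke":
      return { ...state, nodeStatus: { ...state.nodeStatus, [ev.toolId]: "running" } };
    case "tool:result":
      return {
        ...state,
        nodeStatus: {
          ...state.nodeStatus,
          [ev.toolId]: ev.status === "success" ? "idle" : "error",
        },
      };
    case "turn:complete":
      return {
        activeSkill: null,
        nodeStatus: { ...state.nodeStatus, [ev.skillId]: "idle" },
      };
  }
}
```

Keeping the reducer pure also makes the resync path cheap: a state snapshot from the server simply replaces the store, and subsequent events fold in on top.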
Interacting with the canvas
The canvas is not just a display — it is an interface.
- Click a node to open its detail panel (configuration, logs, recent invocations)
- Drag an edge between nodes in Pipeline mode to wire them together
- Right-click a node to access actions: disable, configure, view logs, remove
- Scroll to zoom, drag to pan — standard React Flow controls
- Minimap in the bottom-right corner for orientation in large graphs
- Search with Ctrl+K / Cmd+K to find and focus any node
Why transparency matters
Brain Canvas exists because trust requires visibility. When an AI assistant makes a decision — choosing a tool, recalling a memory, routing through a skill — you should be able to see that decision as it happens and understand why it was made.
This is especially important for self-hosted deployments where you are responsible for your own data. Brain Canvas gives you the same observability you would expect from any production system: real-time monitoring, execution traces, and the ability to intervene when something goes wrong.
Every execution on the Brain Canvas is also logged to SQLite, so you can review past sessions, debug failures, and understand how your AI’s behavior evolves over time.
Self-healing
Chvor includes a self-healing system that monitors tool executions and skill activations on the Brain Canvas. When a tool times out, returns an error, or produces invalid output, the system automatically:
- Detects the failure and marks the node on the canvas
- Adjusts parameters — retries with modified inputs, different timeouts, or fallback strategies
- Recovers gracefully — the conversation continues without the user noticing a disruption
This is visible on the Brain Canvas in real time: you’ll see the failed node flash, the retry attempt animate, and the successful recovery complete. No manual intervention required.
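The detect-retry-fallback loop described above can be sketched as follows. The function name, the doubling timeout policy, and the fallback signature are all assumptions for illustration; Chvor's actual recovery logic may differ:

```typescript
// Hedged sketch of self-healing tool invocation: retry with a longer
// timeout on each failure, then fall back so the conversation continues.
async function invokeWithRecovery<T>(
  invoke: (timeoutMs: number) => Promise<T>,
  fallback: () => Promise<T>,
  baseTimeoutMs = 5000,
  maxRetries = 2,
): Promise<T> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      // Assumed policy: double the timeout on each retry.
      return await invoke(baseTimeoutMs * 2 ** attempt);
    } catch {
      // Failure detected; this is where the canvas would flash the node
      // and animate the retry edge.
    }
  }
  return fallback(); // graceful recovery, no user-visible disruption
}
```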
Next steps
- Skills — learn how to define the behavioral nodes that appear on the canvas
- Tools & MCP — connect external tools and watch them light up on the graph
- Channels — see how messages flow from different platforms through the canvas