From Black Box to Glass Box: Real-Time AI Visualization with Brain Canvas
You type a message. You wait. Something happens behind the curtain. A response appears. Was it good? Maybe. Was it right? Hopefully. Do you have any idea what happened between your input and that output? Absolutely not.
This is the state of AI interaction for most people. A black box. You feed it words, it feeds words back, and the entire process in between is invisible. If the answer is wrong, you cannot trace why. If a tool was called, you do not know which one or what it returned. If the AI retrieved something from memory, you have no way to verify whether it pulled the right context or hallucinated the wrong one.
We think that is broken. Not just inconvenient — fundamentally broken. If you are going to rely on an AI system for real work, you need to be able to see it work. That is why we built Brain Canvas.
What Brain Canvas Is
Brain Canvas is Chvor’s real-time visualization layer. Built on React Flow, it renders every component of your AI system as an interactive node graph that updates live as the AI processes your request.
At the center sits the AI brain node — the orchestrator that receives your input and coordinates everything that follows. Radiating outward like a constellation, you see every skill the brain can invoke, every tool it has access to, every communication channel it monitors, and the memory system it draws from. This is not a static diagram. It is a living, breathing representation of your AI’s architecture, and it moves when the AI moves.
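In React Flow terms, that constellation is just an array of nodes and an array of edges. Here is a minimal sketch of the shape of such a graph — the local types, ids, labels, and positions are illustrative, not Chvor's actual schema:

```typescript
// Minimal React Flow-style node/edge shapes (local types, not the real library's).
interface CanvasNode {
  id: string;
  type: "brain" | "skill" | "tool" | "channel" | "memory";
  position: { x: number; y: number };
  data: { label: string };
}

interface CanvasEdge {
  id: string;
  source: string;
  target: string;
}

// The brain sits at the center; skills, tools, channels, and memory radiate outward.
const nodes: CanvasNode[] = [
  { id: "brain", type: "brain", position: { x: 0, y: 0 }, data: { label: "AI Brain" } },
  { id: "skill-search", type: "skill", position: { x: -200, y: -120 }, data: { label: "Web Search" } },
  { id: "tool-http", type: "tool", position: { x: -360, y: -200 }, data: { label: "HTTP Client" } },
  { id: "channel-web", type: "channel", position: { x: 200, y: -120 }, data: { label: "Web" } },
  { id: "memory", type: "memory", position: { x: 0, y: 220 }, data: { label: "Memory" } },
];

// Skills and channels hang off the brain; tools hang off the skills that call them.
const edges: CanvasEdge[] = [
  { id: "e-brain-skill", source: "brain", target: "skill-search" },
  { id: "e-skill-tool", source: "skill-search", target: "tool-http" },
  { id: "e-brain-channel", source: "brain", target: "channel-web" },
  { id: "e-brain-memory", source: "brain", target: "memory" },
];
```

The point of the structure is that the graph is data: when the AI's architecture changes, the canvas changes with it.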
When you send a message, the graph comes alive.
Watching the AI Think
The moment your input reaches the orchestrator, Brain Canvas begins reflecting every decision the system makes in real time.
Skill nodes light up when activated. If the AI decides your request requires its code analysis skill, or its web search capability, or its document summarizer, the corresponding node illuminates. You see exactly which skills the AI chose and — just as importantly — which ones it did not.
Tool nodes pulse when invoked. Skills often call external tools to do their work: an API request, a database query, a file operation. Each invocation produces a visible pulse on the relevant tool node. You can watch the AI reach out into the world and pull information back.
Channel nodes show message routing. Chvor supports multiple communication channels — Web, Telegram, Discord, Slack. The channel node for your current session is highlighted as active, so you always know which path your conversation is traveling.
The memory node displays recalled memories and relevance scores. When the AI retrieves context from its persistent memory, you see exactly what it pulled and how relevant the system judged each memory to be. No more wondering whether the AI “remembered” something correctly. You can verify it yourself.
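Conceptually, each of these updates is an execution event applied to node state. Here is a minimal sketch of that event-to-state mapping — the event and state names are hypothetical, chosen for illustration rather than taken from Chvor's actual event schema:

```typescript
// Hypothetical execution events; names are illustrative, not Chvor's real schema.
type ExecutionEvent =
  | { kind: "skill_activated"; nodeId: string }
  | { kind: "tool_invoked"; nodeId: string }
  | { kind: "memory_recalled"; nodeId: string; memories: { text: string; relevance: number }[] };

interface NodeState {
  status: "idle" | "active" | "pulsing";
  recalled?: { text: string; relevance: number }[];
}

// Pure reducer: each incoming event produces the next visual state of the graph.
function applyEvent(states: Map<string, NodeState>, ev: ExecutionEvent): Map<string, NodeState> {
  const next = new Map(states);
  switch (ev.kind) {
    case "skill_activated":
      next.set(ev.nodeId, { status: "active" }); // skill node lights up
      break;
    case "tool_invoked":
      next.set(ev.nodeId, { status: "pulsing" }); // tool node pulses
      break;
    case "memory_recalled":
      // memory node shows what was pulled and how relevant each item scored
      next.set(ev.nodeId, { status: "active", recalled: ev.memories });
      break;
  }
  return next;
}
```

Treating updates as a reducer over events means the canvas never invents state: every highlight on screen traces back to a concrete event the system emitted.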
Reading the Edges
The connections between nodes are not just structural lines. They are animated data flows, and their visual language tells you exactly what is happening at any moment.
A dashed blue pulse traveling along an edge means data is flowing — a request moving from the brain to a skill, or a query heading toward a tool.
A solid green glow means successful execution. The skill completed, the tool returned valid data, the operation finished cleanly.
A solid red glow signals failure. A tool timed out, an API returned an error, something went wrong. You see it immediately, not buried in a log file you would never think to check.
A fading trail along an edge indicates memory retrieval — the AI reaching into its stored context to inform the current response.
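That visual language boils down to a small mapping from edge status to style. A sketch, with illustrative hex colors standing in for the actual palette:

```typescript
// Map an edge's execution status to the visual language described above.
// Color values are illustrative defaults, not Chvor's exact palette.
type EdgeStatus = "flowing" | "success" | "failure" | "memory";

interface EdgeStyle {
  stroke: string;
  strokeDasharray?: string; // present only while data is in flight
  animated: boolean;
}

function edgeStyle(status: EdgeStatus): EdgeStyle {
  switch (status) {
    case "flowing":
      return { stroke: "#3b82f6", strokeDasharray: "6 4", animated: true }; // dashed blue pulse
    case "success":
      return { stroke: "#22c55e", animated: false }; // solid green glow
    case "failure":
      return { stroke: "#ef4444", animated: false }; // solid red glow
    case "memory":
      return { stroke: "#a855f7", animated: true }; // fading trail
  }
}
```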
Within seconds of learning this visual language, you can glance at Brain Canvas and understand the full state of any execution in progress.
Self-Healing, Visible
Here is where things get interesting. Chvor includes automatic self-healing for failed operations, and Brain Canvas makes the entire recovery process visible.
When a tool fails, its node flashes red. Then, without any manual intervention from you, the retry sequence begins. You watch the retry animation pulse along the edge back to the failed node. If the retry succeeds, the node transitions from red to green. If it fails again, the system escalates — trying an alternative tool or degrading gracefully — and you see every step of that decision.
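The recovery loop above can be sketched as retry-then-escalate logic. This synchronous version is a simplification for clarity (real tool calls are asynchronous), and the retry count and fallback mechanics are assumptions, not Chvor's exact policy:

```typescript
// Sketch of self-healing recovery; emit() stands in for canvas updates.
// Retry count and fallback behavior are illustrative assumptions.
function invokeWithRecovery(
  primary: () => string,
  fallback: () => string,
  emit: (state: "red" | "retrying" | "green" | "degraded") => void,
  maxRetries = 2,
): string | null {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    if (attempt > 0) emit("retrying"); // retry animation pulses along the edge
    try {
      const result = primary();
      emit("green"); // node transitions from red to green
      return result;
    } catch {
      emit("red"); // node flashes red on failure
    }
  }
  // Primary exhausted: escalate to an alternative tool, or degrade gracefully.
  try {
    const result = fallback();
    emit("green");
    return result;
  } catch {
    emit("degraded");
    return null;
  }
}
```

Because every state change goes through `emit`, the same sequence that drives the recovery drives the animation — the visualization cannot drift from what actually happened.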
Most platforms bury failures in logs or silently swallow errors. Chvor shows you the failure, shows you the recovery, and lets you verify that the system handled it correctly. No guessing. No hoping.
Two Ways to Execute
Brain Canvas supports two distinct execution modes, and the visual representation changes to match each one.
Constellation mode is the default. The AI dynamically selects which skills and tools to invoke based on the context of your request. The graph topology shifts with every message because the AI is making real-time decisions about what to activate. This is powerful for open-ended tasks where you want the AI to figure out the best approach.
Pipeline mode flips the model. Instead of letting the AI choose, you manually wire the execution graph. You draw the connections between nodes, define the order of operations, and create a deterministic workflow that executes the same way every time. The visualization reflects your wiring — fixed paths, predictable flow, repeatable results.
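Because the wiring is user-drawn, the run order can be derived deterministically from the graph itself. A sketch using a topological sort (Kahn's algorithm) — the step names and the sorting approach are illustrative, not a claim about Chvor's internals:

```typescript
// Derive a deterministic execution order from user-drawn wiring (Kahn's algorithm).
// Step names and the sorting approach are illustrative assumptions.
interface PipelineEdge {
  source: string;
  target: string;
}

function executionOrder(steps: string[], wiring: PipelineEdge[]): string[] {
  // Count incoming edges for each step.
  const indegree = new Map<string, number>();
  for (const s of steps) indegree.set(s, 0);
  for (const e of wiring) indegree.set(e.target, (indegree.get(e.target) ?? 0) + 1);

  // Start from steps with no prerequisites and release successors as they complete.
  const ready = steps.filter((s) => indegree.get(s) === 0);
  const order: string[] = [];
  while (ready.length > 0) {
    const step = ready.shift()!;
    order.push(step);
    for (const e of wiring.filter((w) => w.source === step)) {
      const d = indegree.get(e.target)! - 1;
      indegree.set(e.target, d);
      if (d === 0) ready.push(e.target);
    }
  }
  return order; // same wiring in, same order out — every time
}
```

The same wiring always yields the same order, which is exactly what makes a pipeline-mode workflow repeatable.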
Constellation mode gives you intelligence. Pipeline mode gives you control. Brain Canvas gives you visibility into both.
Real-Time, Not Reconstructed
Everything you see in Brain Canvas is streamed live over WebSocket. This is not a post-hoc reconstruction of what happened — it is a live feed of the AI’s execution.
The events you receive in real time include skill activation, tool invocation, tool results, memory recall, LLM token streaming, and execution completion. Every state change in the system produces a corresponding visual change on the canvas, with latency measured in milliseconds.
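A client for that feed amounts to a WebSocket handler that routes each event type to a canvas update. A sketch with assumed event names and an assumed endpoint — not Chvor's published schema:

```typescript
// Hypothetical event union mirroring the stream described above;
// names and payloads are assumptions, not Chvor's published schema.
type CanvasEvent =
  | { type: "skill_activated"; id: string }
  | { type: "tool_invoked"; id: string }
  | { type: "tool_result"; id: string; ok: boolean }
  | { type: "memory_recalled"; id: string; relevance: number }
  | { type: "token"; text: string }
  | { type: "execution_complete" };

// Route each event to a canvas update; update() stands in for the renderer.
function routeEvent(ev: CanvasEvent, update: (msg: string) => void): void {
  switch (ev.type) {
    case "skill_activated": update(`light up ${ev.id}`); break;
    case "tool_invoked": update(`pulse ${ev.id}`); break;
    case "tool_result": update(`${ev.id} -> ${ev.ok ? "green" : "red"}`); break;
    case "memory_recalled": update(`memory ${ev.id} (relevance ${ev.relevance})`); break;
    case "token": update(`stream: ${ev.text}`); break;
    case "execution_complete": update("done"); break;
  }
}

// Wiring it to a live socket (browser WebSocket API); the endpoint is hypothetical:
// const ws = new WebSocket("wss://your-chvor-host/canvas");
// ws.onmessage = (msg) => routeEvent(JSON.parse(msg.data), applyToCanvas);
```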
This matters because timing matters. When you can watch the AI process your request in real time, you develop an intuitive sense for how it works. You notice when a skill takes longer than expected. You see when memory retrieval adds latency. You understand the actual cost — in time and computation — of each capability.
Interactive, Not Just Visual
Brain Canvas is not a read-only dashboard. It is an interactive workspace.
Drag nodes to rearrange the layout and organize the graph in whatever way makes sense to you. Click any node to open its detail panel — see configuration, execution history, and current state. Scroll to zoom in on a specific cluster of nodes or zoom out for the full picture.
When emotion modeling is enabled, you will also see emotion particles orbiting the brain node, reflecting the AI’s current emotional state. It is a subtle, ambient indicator that adds another dimension to understanding how the system is behaving.
Why This Matters
Transparency in AI is not an academic concern. It has immediate, practical consequences.
Trust. When you can watch the AI work, you can verify that it is doing what you expect. Trust built on observation is qualitatively different from trust built on hope.
Debugging. When something goes wrong, you can trace the exact path of execution and identify precisely where the failure occurred. No more guessing. No more “try asking it again differently.”
Learning. By watching the AI make decisions — which skills it activates, which tools it calls, how it uses memory — you develop a genuine understanding of how the system works. That understanding makes you a better operator.
Control. Pipeline mode means you are not locked into the AI’s judgment. When you need deterministic execution — for compliance, for reliability, for peace of mind — you can wire exactly the workflow you want and watch it execute exactly as designed.
Transparency Is Not a Feature
We do not list Brain Canvas as a feature on a marketing page and move on. For us, transparency is a design philosophy that runs through every layer of Chvor. Brain Canvas is the most visible expression of that philosophy, but the same principle drives our approach to memory, to reasoning, to data ownership.
If you cannot see what your AI is doing, you do not really control it. And if you do not control it, you are not its operator — you are its audience.
We think you deserve to be more than an audience.
Explore the full documentation to see how Brain Canvas fits into the broader Chvor architecture, or visit our GitHub repository to start building with it yourself. Every component you see on the canvas is open source, self-hosted, and yours.