Every conversation with an AI starts the same way: a blinking cursor, a text field, and the implicit assumption that language is the only medium through which minds can meet.
That assumption is so baked into how we interact with LLMs that we've almost stopped noticing it. Chat is the interface. Text in, text out. The AI's internal state, its vast associative web of concepts, weights, and probabilistic relationships, gets flattened into a reply we scroll past and mostly forget.
For most use cases, that's fine. Chat interfaces work. But I kept thinking about what we're missing.
What Lives Between the Words
When a language model responds to you, it isn't drawing a single straight line from your question to an answer. It's navigating a high-dimensional space of meaning. Concepts cluster and orbit each other. Some words pull strongly toward certain ideas; others exist at strange crossroads between domains that have no obvious relation on the surface. The model's "thought" is simultaneously everywhere in that space before collapsing into the linear sequence of tokens it hands you.
That's philosophically interesting on its own. But it also means that a text response is, by definition, a lossy compression of whatever just happened inside the model. The associations, the tension between competing concepts, the particular region of meaning the AI was moving through... most of that evaporates before it reaches you.
What would it look like to see that instead of just reading it?
The MindWalk Premise
MindWalk is my attempt at an answer. It's a 3D word-cloud explorer that turns an AI conversation into a navigable space. Instead of reading the AI's response, you see it, rendered as a constellation of glowing words on a Fibonacci sphere, floating in a star-field, rotating slowly in three dimensions. Words that recur across multiple turns grow larger and brighter. The cloud is always the AI's last five responses blended together, so it evolves as the conversation deepens.
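The blending described above can be sketched in a few lines. This is a hypothetical illustration, not MindWalk's actual code: it takes the last five responses, counts content words with a mild recency bias (the bias is my assumption), and returns weights a renderer could map to font size and glow.

```typescript
// Words that recur across the last five responses gain weight;
// the caller maps weight -> label size / brightness.
const STOP_WORDS = new Set(["the", "a", "an", "of", "and", "to", "in", "is", "it", "that"]);

function cloudWeights(lastResponses: string[]): Map<string, number> {
  const recent = lastResponses.slice(-5);    // only the last five turns
  const weights = new Map<string, number>();
  recent.forEach((text, i) => {
    const recency = (i + 1) / recent.length; // newer turns count slightly more (assumption)
    for (const raw of text.toLowerCase().match(/[a-z']+/g) ?? []) {
      if (raw.length < 3 || STOP_WORDS.has(raw)) continue;
      weights.set(raw, (weights.get(raw) ?? 0) + recency);
    }
  });
  return weights;
}
```

Because older turns still contribute, the cloud evolves gradually rather than being replaced wholesale on every step.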
But the real mechanic is the click. You don't type a follow-up question. You click a word.
Clicking a word is a way of saying: this is where I want to go. The AI generates a new response pondering that concept, and the cloud reshapes itself around whatever associations emerge. Click again. The cloud shifts. You're not following a thread. You're charting a course through conceptual space, one word at a time.
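One plausible way to turn a click into the next model call is to wrap the chosen word in a small prompt template. The wording and function name below are my invention, meant only to show the shape of the mechanic:

```typescript
// Hypothetical prompt builder for the click mechanic: the clicked word,
// plus the trail of words chosen so far, becomes the next instruction.
function promptForClick(word: string, path: string[]): string {
  const trail = path.length
    ? `So far the walk has passed through: ${path.join(" -> ")}.`
    : "This is the first step of the walk.";
  return [
    `The explorer has clicked the word "${word}".`,
    trail,
    `Reflect freely on "${word}" -- its associations, tensions, and neighbors in meaning.`,
  ].join("\n");
}
```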
Why This Matters More Than It Sounds
There's a reason chat became the default interface for AI: it's familiar. We've been writing messages to each other for centuries. The chat paradigm asks nothing new of the user. You already know how to do it.
But familiarity has a cost. Chat interfaces implicitly frame the AI as a respondent, something that answers when spoken to, rather than a space you can explore. The conversational metaphor puts you in the driver's seat linguistically while locking the steering wheel. You can only go where your words can take you, constrained by what you know to ask.
MindWalk flips that. When you see a 3D cloud of glowing concepts, you notice things you wouldn't have thought to type. A word appears that you wouldn't have reached through deliberate questioning. You click it out of curiosity, not intention. The AI takes you somewhere unexpected. And then that response produces a new cloud with words you wouldn't have generated yourself. Your journey is genuinely collaborative, shaped as much by what the AI surfaces as by what you direct.
This is the thing I find most interesting about the project: it changes the epistemic relationship between you and the model. Conversation presumes you know what you want to know. Exploration doesn't.
Walking as Thinking
The metaphor of "walking" through an AI's mind wasn't accidental.
Walking is fundamentally different from driving. When you drive, you determine the route in advance. Walking lets the path reveal itself. You notice a doorway you wouldn't have planned for. You turn because something catches your eye. The destination is allowed to emerge.
MindWalk is designed around that idea. The Journey Tracker, a 10-step progress bar in the HUD, watches where you've been. After five steps you can trigger Synthesis early; at ten, it fires automatically. At that point, "The Weaver" activates: a special AI prompt that examines your entire word path, finds the hidden pattern in the sequence of concepts you chose, and names it. A Constellation label, two to four words, the shape your thinking drew across the space. Something like "The Architecture of Patience" or "Fractures of Arrival."
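The gating described above (manual synthesis after five steps, automatic at ten) reduces to two thresholds, and the Weaver itself is just a prompt over the word path. The constants and prompt wording below are a sketch under those stated rules, not the project's actual implementation:

```typescript
// Journey Tracker gating: Synthesis unlocks at five steps, fires at ten.
const EARLY_SYNTHESIS_STEPS = 5;
const MAX_STEPS = 10;

function canSynthesize(steps: number): boolean {
  return steps >= EARLY_SYNTHESIS_STEPS;
}

function mustSynthesize(steps: number): boolean {
  return steps >= MAX_STEPS;
}

// "The Weaver": a prompt over the whole word path asking for a short
// Constellation label naming the hidden pattern.
function weaverPrompt(path: string[]): string {
  return [
    "You are The Weaver. Here is the sequence of concepts an explorer chose:",
    path.join(" -> "),
    "Find the hidden pattern in this path and name it with a Constellation label of two to four words.",
  ].join("\n");
}
```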
It's a title for a journey you didn't know you were taking.
That moment of synthesis is, for me, the emotional core of the project. You followed your curiosity, clicked what interested you, and somewhere across ten steps you traced something. The Weaver finds the outline of it. And you get to see what you were actually thinking about, even if you couldn't have said it before you started.
The Visualization Is Not Decoration
I want to be explicit about something: the 3D word cloud in MindWalk isn't a gimmick layered on top of a chat interface. It's doing real epistemic work.
The spatial layout, built on a Fibonacci sphere that distributes points with remarkable uniformity across a 3D surface, means that proximity and prominence carry meaning. Words that recur gain visual weight. Words that appear once are lighter, peripheral. The cloud as a whole is a map of the AI's recent attention, rendered in a space you can orbit and examine from any angle.
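The Fibonacci sphere is a standard construction, so it's worth showing concretely: points are spaced evenly in height and rotated around the axis by the golden angle, which yields the near-uniform coverage mentioned above.

```typescript
type Vec3 = { x: number; y: number; z: number };

// N points spread with near-uniform density over the unit sphere.
function fibonacciSphere(n: number): Vec3[] {
  const golden = Math.PI * (3 - Math.sqrt(5)); // golden angle, ~2.39996 rad
  const points: Vec3[] = [];
  for (let i = 0; i < n; i++) {
    const y = 1 - (2 * (i + 0.5)) / n;         // evenly spaced heights in (-1, 1)
    const r = Math.sqrt(1 - y * y);            // radius of the horizontal slice at height y
    const theta = golden * i;                  // spiral around the vertical axis
    points.push({ x: r * Math.cos(theta), y, z: r * Math.sin(theta) });
  }
  return points;
}
```

Each point sits exactly on the unit sphere, and because the golden angle is maximally irrational, consecutive points never stack into visible seams or poles.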
When you rotate the sphere and notice that two seemingly unrelated words are positioned close together, that's the model's probability landscape speaking. When a word you expect to see isn't there, that absence is also information. The 3D interface lets you perceive the texture of the AI's associations in a way that no amount of text can communicate.
And then you get to respond to that texture spatially, by clicking the word that pulls at you, rather than linguistically, by composing a sentence. It's a different channel. It surfaces different intuitions.
Open, Modular, Yours
MindWalk is open source, and it's built to be provider-agnostic. Plug in your OpenAI, Anthropic, Google Gemini, xAI, or OpenRouter API key and walk through whichever mind interests you. The BYOK (Bring Your Own Key) model means you can self-host it and keep your conversations and your keys entirely under your control.
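A provider-agnostic BYOK layer usually boils down to a small registry mapping each vendor to a base URL and a header builder, with the key held only in local state. The structure below is illustrative rather than MindWalk's actual registry; the two header shapes shown are the documented conventions for OpenAI and Anthropic, and the other providers would slot in the same way:

```typescript
interface Provider {
  name: string;
  baseUrl: string;
  buildHeaders(apiKey: string): Record<string, string>;
}

// Two illustrative entries; endpoint paths and header shapes vary by vendor.
const PROVIDERS: Provider[] = [
  {
    name: "openai",
    baseUrl: "https://api.openai.com/v1",
    buildHeaders: (key) => ({ Authorization: `Bearer ${key}` }),
  },
  {
    name: "anthropic",
    baseUrl: "https://api.anthropic.com/v1",
    buildHeaders: (key) => ({ "x-api-key": key, "anthropic-version": "2023-06-01" }),
  },
];

function providerByName(name: string): Provider | undefined {
  return PROVIDERS.find((p) => p.name === name);
}
```

Because the key never leaves the function that builds the request, self-hosting keeps both conversations and credentials local.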
The journey system also lets you save, export, and import walks. Branch from any prior step. Compare the paths you didn't take. What would have happened if, at step three, you'd clicked a different word? The fork system lets you find out.
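Branching from a prior step amounts to copying the walk up to that step into a fresh journey record. The field names here are assumptions, not MindWalk's actual schema:

```typescript
interface Step { word: string; response: string }
interface Journey { id: string; steps: Step[] }

// Copy the walk up to (and including) stepIndex, leaving the original
// intact, so the explorer can click a different word from that point.
function branchFrom(journey: Journey, stepIndex: number, newId: string): Journey {
  return { id: newId, steps: journey.steps.slice(0, stepIndex + 1) };
}
```

Since a journey is plain data, export and import are just serialization of this record.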
Where the Interface Goes from Here
MindWalk is an experiment. It's a question posed in code: what do we lose by insisting that AI interaction happen through text alone, and what might we recover by thinking spatially?
I don't think chat is going anywhere, nor should it. But I think we're at the beginning of a longer conversation about what AI interfaces can be. The chat paradigm made sense when we were typing queries into search engines and expected documents in return. We're past that now. LLMs don't return documents. They reason, associate, and drift through meaning-space in ways that are genuinely more like thinking than retrieval.
The interface should reflect that. Not by abandoning language, since language is irreplaceable, but by supplementing it with spatial metaphors that let us perceive, navigate, and respond to the structure of AI cognition rather than just its output.
That's what MindWalk is trying to be. A way to walk through what the model is doing, not just read what it says.
Try MindWalk at mindwalk.joepeterson.work. The source is on GitHub. I genuinely want to know where your first walk takes you — find me on X.

