# ThinkSpatial

> A spatial thinking system. Think out loud. See what emerges.

ThinkSpatial is a voice-first spatial thinking workspace. You speak your thoughts aloud, or type them, and the AI structures them into a living visual map (called a bubblemap) in real time. Every node in the map carries memory. Every session persists. Every conversation builds structure.

This is not a note-taking tool. It is a cognitive operating system.

## What It Does

- Voice-first interface with three engines: Ambient (fast, always-on), Vision (sees what you're looking at), and Docent (guided tours of your thinking)
- Two modes: Build (brainstorming ideas, decisions, and tasks) and Learn (Socratic questioning that finds gaps and tests understanding)
- Real-time bubblemap generation as you speak or type
- Mid-conversation image generation: visuals appear alongside the AI's spoken response
- Background synthesis (Dreams): the AI reviews your map while you sleep and builds tools, plans, and connections overnight
- Persistent memory across sessions: close the tab, come back, and the AI knows what you discussed, decided, and left unresolved
- Fork & Remix: clone any map and build on someone else's thinking, with full lineage tracking
- Flow Space: your map as a shareable feed, where every node becomes a scrollable card
- Templates: turn your map into a reusable starting point for others
- Web search mid-conversation without breaking the flow
- Four AI voices: sage, coral, verse, ash

## How It Works

1. Open ThinkSpatial and choose a mode (Build or Learn)
2. Tap the orb to start talking, or type in the text editor
3. As you think out loud, the AI listens, responds, and structures your thoughts into a spatial map
4. Nodes appear and connect in real time; the map grows with your thinking
5. The AI can generate images, search the web, and surface connections you missed
6. Close the tab. Come back. Everything persists.
7. Optionally, let Dreams run overnight and wake up to AI-generated tools and synthesis

## Key Concepts

- **Bubblemap**: A living visual map where every node carries memory and context. Not a static diagram; it grows and evolves with your thinking.
- **Subconscious Stack**: Projects load as persistent data layers, not ephemeral documents. The AI accumulates understanding over time.
- **Emergent Entity**: An autonomous AI personality with its own perspective. It offers genuine counter-perspectives, not neutral chatbot responses.
- **Dream Interface**: A background creation engine that synthesizes build recommendations and links them back to source nodes while you're away.

## Build Mode (Atlas of the Void)

What happens when you enter the space: you tap the orb and choose a voice (sage, coral, verse, or ash), then choose an engine. Ambient is fast and always-on, Vision adds sight so the AI can see what you're looking at, and Docent turns the AI into a guided tour of your own map.

The room knows the difference between your voice and its own. Echo suppression isolates your voice from playback in real time. Barge-in detection pauses the AI mid-word and buffers its position so it can resume seamlessly. When you go quiet, the AI continues, probing deeper and asking what you haven't thought to ask.

The map has two tiers of attention: nodes the AI is thinking about glow softly, and nodes it is directly referencing pulse brighter. The camera drifts toward whatever cluster the conversation is touching.

## Learn Mode (Curriculum of the Void)

Learn mode is Socratic: it finds the gaps in your thinking, requests confirmations, and surfaces what you missed. Blueprint lessons walk you through structured content step by step, with checkpoint evaluation, confidence scoring, and auto-saved progress. The Docent engine turns the AI into a guided tour of your own map with real pedagogical structure.

## Status

ThinkSpatial is currently in early access (waitlist).
Visit https://thinkspatial.ai to request access.

## Links

- Website: https://thinkspatial.ai
- Creator: Calvin Hernton
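## Appendix: The Bubblemap as a Data Structure

For readers who think in types, the bubblemap concept above (nodes that carry memory, connect to each other, and hold tiers of attention) can be sketched as a small data structure. This is a hypothetical illustration only, not ThinkSpatial's actual data model; every name here (`BubbleNode`, `Bubblemap`, and their fields) is invented for the sketch.

```typescript
// Hypothetical sketch of the bubblemap concept. Not ThinkSpatial's
// real implementation; all names are invented for illustration.

// Two tiers of attention plus an idle state: "thinking" nodes glow
// softly, "referencing" nodes pulse brighter.
type Attention = "idle" | "thinking" | "referencing";

interface BubbleNode {
  id: string;
  label: string;
  memory: string[]; // what was said, decided, or left unresolved here
  links: string[];  // ids of connected nodes
  attention: Attention;
}

class Bubblemap {
  private nodes = new Map<string, BubbleNode>();

  // Add a node as a thought is spoken or typed.
  addNode(id: string, label: string): BubbleNode {
    const node: BubbleNode = { id, label, memory: [], links: [], attention: "idle" };
    this.nodes.set(id, node);
    return node;
  }

  // Connect two nodes as the map grows with the conversation.
  connect(a: string, b: string): void {
    this.nodes.get(a)?.links.push(b);
    this.nodes.get(b)?.links.push(a);
  }

  // Append to a node's memory; conceptually, this is what persists
  // when you close the tab and come back.
  remember(id: string, note: string): void {
    this.nodes.get(id)?.memory.push(note);
  }

  get(id: string): BubbleNode | undefined {
    return this.nodes.get(id);
  }
}
```

The key design point the sketch illustrates is that memory lives on each node rather than in a single transcript, so context can accumulate per idea rather than per session.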