A concept for better mental models in persistent AI conversations
[ABSTRACT]
Large language models now support persistent conversations across multiple chat sessions, but current terminology creates user confusion that leads to context exhaustion and hallucination. This post proposes “Conversation Scroll” as a new term for the cognitive framework that sits above many “chat threads”: a shift that mirrors natural human conversation behavior, resolves the refresh-versus-restart problem, and enables intelligent auto-refocus features.
[1] THE PROBLEM: WHEN AI CONVERSATIONS BREAK DOWN
[1.1] Context Exhaustion is Real
If you’ve used ChatGPT, Claude, Mistral, Gemini or any modern LLM long enough, you’ve experienced this: the conversation starts great, then gradually degrades. The AI begins contradicting itself. It forgets what you discussed earlier. It hallucinates details. You’re stuck in a broken conversation.
The technical reason? Context window exhaustion. Every message adds tokens, and eventually the model's attention is spread across so much history that earlier details get lost or distorted.
The human reason? We don't know when to refresh.
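To ground the technical half of that, here is a minimal sketch, assuming a rough four-characters-per-token estimate and an illustrative 128k-token limit (both are assumptions, not any platform's real figures), of how a thread fills its context window:

```typescript
// Illustrative only: real platforms use model-specific tokenizers and limits.
interface Message {
  role: "user" | "assistant";
  text: string;
}

// Rough rule of thumb: ~4 characters per token (an assumption, not exact).
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// True once the running history nears a hypothetical context limit.
function isContextHeavy(
  history: Message[],
  limitTokens = 128_000,
  threshold = 0.8
): boolean {
  const used = history.reduce((sum, m) => sum + estimateTokens(m.text), 0);
  return used >= limitTokens * threshold;
}
```

In this toy model every message pushes the count upward and nothing brings it back down; the only relief is a refresh, which is exactly the behavior the rest of this post tries to name.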
[1.2] The Refresh vs Restart Dilemma
Modern AI platforms offer “persistent projects” or “workspaces” where context carries across multiple chat sessions. This is powerful: you can return days later and the AI remembers your work.
But here’s what users struggle with:
– When should I start a new chat thread?
– Will starting a new thread lose my context?
– Am I “starting over” or “continuing”?
– Is this still the same conversation?
Without clear answers, users either:
A) Never start new threads – leading to context exhaustion
B) Start new threads too often – losing valuable context
C) Don’t use persistent workspaces at all – missing the feature entirely
[1.3] The Terminology Problem
Current terms like “Project,” “Workspace,” or plain “Folder” don't help. They sound static. They imply storage, not conversation. They don't communicate the dynamic behavior: persistent context with periodic refresh.
The terminology fails to teach users what they’re actually doing.
[2] THE BIOLOGICAL INSIGHT: HOW HUMANS ACTUALLY TALK
[2.1] We Already Do This
Here’s the breakthrough: humans naturally create “refocus moments” during long conversations without ever “starting over.”
Think about a long discussion with a friend:
- You sip your coffee – brief pause, same conversation
- You take a breath – reset your attention, still engaged
- You blink – micro refresh, continuous focus
- You shift your posture – change position, same discussion
- You say “okay, so…” – verbal transition, same topic
These are refocus moments. You’re not restarting the conversation. You’re not losing context. You’re maintaining continuity while giving your brain micro refreshes.
[2.2] Why This Matters for AI
AI conversations benefit from the exact same pattern:
- Starting a new chat thread = Taking a breath
- Maintaining the workspace = Continuous conversation
- Refocusing attention = Mental clarity without context loss
The problem isn't technical; LLMs can handle this. The problem is cognitive: users don't have a mental model that matches this behavior.
[2.3] The Biological Parallel
Just as humans don’t “start a new conversation” every time they blink or sip coffee, AI users shouldn’t feel they’re “starting over” when creating a new thread in the same workspace.
It’s a natural refresh pattern, not a restart.
This isn't theory; this is how human attention works. We need periodic refreshes to maintain focus. Why would AI conversations be different?
[3] THE SOLUTION: A SIMPLE COGNITIVE SHIFT
[3.1] Introducing “Conversation Scroll”
Replace workspace terminology with “Conversation Scroll.”
Why “scroll” works:
- Scrolls are continuous – one long document, not separate files
- Scrolls are sectioned – you unroll what you need, when you need it
- Scrolls preserve history – earlier sections remain accessible
- Scrolls support re-reading – you can scroll back through previous parts
Each chat thread is a new section of the same scroll.
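One way to picture that relationship is as a small data model. The sketch below is hypothetical (the type and field names are mine, not any vendor's schema), but it captures the key point: one persistent scroll, many thread sections.

```typescript
// Hypothetical data model: names and fields are illustrative, not a real schema.
interface Thread {
  id: string;
  title: string;         // e.g. "exploration", "refinement"
  startedAt: Date;
  messages: string[];    // simplified; real threads also store roles and metadata
}

interface ConversationScroll {
  id: string;
  topic: string;         // the continuous subject of the whole scroll
  sharedContext: string; // context that persists across every thread
  threads: Thread[];     // ordered sections of the same conversation
}

// Starting a new thread unrolls a new section; the scroll itself is untouched.
function startNewThread(scroll: ConversationScroll, title: string): Thread {
  const thread: Thread = {
    id: "thread-" + (scroll.threads.length + 1),
    title,
    startedAt: new Date(),
    messages: [],
  };
  scroll.threads.push(thread);
  return thread;
}
```

Nothing in startNewThread clears sharedContext or discards earlier threads: a new thread is appended, not a reset.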
[3.2] Mental Model Transformation
Old thinking (with “Project” or “Workspace”):
“Starting a new thread means starting over”
New thinking (with “Conversation Scroll”):
“Starting a new thread means unrolling a new section of the same scroll”
That’s it. That’s the shift.
[3.3] Why Terminology Matters
This isn’t just semantics. The right metaphor teaches users how to behave:
- “Scroll” implies continuity + sections
- “New thread in this scroll” clarifies the relationship
- “Scroll back” naturally describes reviewing past threads
- “Unroll more” suggests forward progress
Users immediately understand: same conversation, fresh focus.
[3.4] Real World Validation
This concept emerged from my own private chats on an LLM platform, where the disconnect between “Project” terminology and actual behavior became obvious.
Testing the “scroll” language with tag numbering across threads:
Thread 1: #01 – #07 (exploration)
Thread 2: #08 – #17 (refinement)
Users immediately understood they were “continuing the scroll” rather than “starting a new project.” The terminology did the teaching.
[4] CURRENT STATE: WHAT’S WRONG RIGHT NOW
[4.1] Across the Industry
Let’s look at how major LLM platforms handle this:
ChatGPT: “Projects” with threads
- Users confused about when to start new threads
- No guidance on context management
- “Project” sounds singular, not continuous
Claude: “Projects” with conversations
- Same confusion around thread behavior
- “Folder” metaphor implies static storage
- Users exhaust context in single threads
Gemini: “Workspaces” (varies)
- “Workspace” is better but still static
- Doesn’t convey the refresh pattern
[4.2] The Common Failure
None of these terms communicate:
- That context persists across threads
- That new threads are refocus moments, not restarts
- When users should start new threads
- Why this matters for conversation quality
The result? Features that could prevent hallucination and context exhaustion go unused or misused.
[4.3] User Behavior Patterns
What actually happens:
Pattern A: The Mega Thread User
- Never starts new threads
- Context window fills up
- Quality degrades over time
- Blames “AI getting dumber”
Pattern B: The Fresh Start User
- Starts new projects constantly
- Loses valuable context
- Repeats themselves
- Frustrated by lack of memory
Pattern C: The Confused User
- Doesn’t understand the feature
- Sticks to basic chat
- Never experiences the benefit
[5] THE PATH FORWARD: IMPLEMENTATION IDEAS
[5.1] Phase 1 – Terminology (Immediate)
Platform providers can implement this today:
- Rename “Projects/Workspaces” to “Conversation Scrolls” (or “Scroll Folders”)
- Update UI language: “Start new thread in this scroll” (see the sample string table below)
- Change icons from folders to scroll metaphors
- Update documentation and tooltips
Cost: Minimal (just terminology)
Impact: Immediate cognitive clarity
Risk: Near zero
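To illustrate how small the change is, here is a hypothetical string table for the rename; the keys and wording are assumptions, shown only to suggest the scale of the edit:

```typescript
// Hypothetical Phase 1 string table: the rename is mostly a copy change.
const scrollStrings = {
  container: "Conversation Scroll",              // was: "Project" / "Workspace"
  newThread: "Start new thread in this scroll",  // was: "New chat"
  scrollBack: "Scroll back to earlier threads",  // was: "View chat history"
  emptyState: "This scroll has no threads yet. Unroll its first section to begin.",
} as const;
```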
[5.2] Phase 2 – Smart Notifications (Near-term)
Add intelligent context monitoring:
When a thread approaches context limits, show:
“This thread is getting context-heavy. Start a new refocused thread in this scroll?”
[Create New Thread] [Continue Here]
This approach (sketched in code after this list):
- Prevents hallucination before it happens
- Teaches users the scroll paradigm contextually
- Remains optional (power users can ignore)
- Reinforces the biological parallel
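As a rough illustration, the check behind such a notification could be as small as the sketch below; the names, threshold, and token limit are assumptions, not any platform's real implementation.

```typescript
// Hypothetical refocus monitor: values and names are illustrative.
interface ThreadStats {
  usedTokens: number;
  limitTokens: number;
}

interface Suggestion {
  suggestNewThread: boolean;
  message?: string;
}

function checkRefocus(stats: ThreadStats, threshold = 0.8): Suggestion {
  if (stats.usedTokens / stats.limitTokens < threshold) {
    return { suggestNewThread: false };
  }
  return {
    suggestNewThread: true,
    message:
      "This thread is getting context-heavy. Start a new refocused thread in this scroll?",
  };
}

// Example: at 110k of a 128k-token limit, the monitor suggests a refocus moment.
console.log(checkRefocus({ usedTokens: 110_000, limitTokens: 128_000 }));
```

The exact threshold matters less than the timing: the suggestion arrives before quality degrades, which is when the scroll paradigm is easiest to teach.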
[5.3] Phase 3 – Seamless Scrolling (Future)
Advanced UI concepts:
- Visual “scroll map” showing thread sections
- One-click navigation between thread sections
- Auto-thread creation at natural break points
- “Scroll view” that displays threads as one continuous flow (see the sketch below)
Note: Phase 1 solves 80% of the problem. Phases 2-3 are enhancements, not requirements.
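As a hypothetical sketch of the last idea on that list, a scroll view could be produced by simply flattening thread sections into one chronological stream; the types and names here are assumptions, not a real rendering API.

```typescript
// Hypothetical "scroll view": renders thread sections as one continuous flow.
interface Section {
  threadTitle: string;
  startedAt: Date;
  messages: string[];
}

// Sorts sections by start time and joins them, marking each refocus moment with a divider.
function renderScrollView(sections: Section[]): string {
  return sections
    .slice()
    .sort((a, b) => a.startedAt.getTime() - b.startedAt.getTime())
    .map(s =>
      `--- ${s.threadTitle} (${s.startedAt.toDateString()}) ---\n` + s.messages.join("\n")
    )
    .join("\n\n");
}
```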
[5.4] Beyond Single Platforms
This isn’t just for one company. Any platform offering persistent AI conversations can benefit:
- Consumer chatbots (ChatGPT, Claude, Gemini, etc.)
- Enterprise AI tools (internal chatbots, agents)
- AI coding assistants (Cursor, Copilot, etc.)
- Creative tools with AI features
The cognitive pattern is universal: humans need continuity + refresh.
[6] WHY THIS MATTERS
[6.1] For Users
Better conversation quality:
- Clearer mental models = better usage patterns
- Understanding when to refresh = less hallucination
- Confidence in persistence = deeper work
- Natural behavior = less cognitive load
[6.2] For Platform Providers
Better product outcomes:
- Increased feature adoption
- Reduced support burden (“how do workspaces work?”)
- Improved user satisfaction
- Competitive differentiation (first to get it right)
[6.3] For the Industry
Setting standards:
- Establishes cognitive framework for persistent AI chat
- Demonstrates value of biological parallels in UX
- Shows how terminology shapes user behavior
- Opens research into auto-refocus patterns
[6.4] The Bigger Picture
We’re in the early days of human-AI collaboration. The interfaces we design now will shape how millions of people think about and use AI.
Getting the mental models right matters.
“Conversation Scroll” isn’t just better terminology – it’s a more accurate model of how conversation works, whether human-to-human or human-to-AI.
[7] CONCLUSION
[7.1] The Core Insight
Persistent AI conversations need periodic refresh, just as human conversations need micro-pauses. The problem isn't technical capability; it's user understanding.
By shifting from static metaphors (“Project,” “Folder”) to dynamic ones (“Scroll”), we can teach users the right behavior through language alone.
[7.2] Call to Action
To platform providers:
Try the terminology. A/B test “Scroll” against “Project.” Measure thread creation patterns and conversation quality.
To UX designers:
Consider biological parallels. How do humans naturally handle long interactions? How can we mirror that in digital interfaces?
To users:
Advocate for better mental models. If your AI platform uses confusing terminology, tell them. Better UX benefits everyone.
[7.3] Future Work
Areas worth exploring:
- Optimal thread length before refresh
- Visual representations of conversation scrolls
- Auto-detection of natural break points
- Cross-platform scroll synchronization
- Collaborative scrolls (team contexts)
[7.4] Final Thought
The best interfaces feel natural. They match how humans already think and behave.
“Conversation Scroll” works because it’s not inventing new behavior – it’s naming behavior that already exists in both human conversation and AI interaction.
Sometimes the biggest improvements come from the simplest changes: calling things what they really are.
[8] NEW TERMINOLOGY IMPLEMENTATION
If “Conversation Scroll” feels too different, alternatives include:
- Scroll Folder (blends new + familiar)
- Knowledge Scroll (emphasizes context accumulation)
- Persistent Scroll (highlights continuity)
- Convo Scroll (casual, conversational shorthand)
- Scroll (simplest, most direct)
Recommendation: “Conversation Scroll,” “Scroll Folder,” or simply “Conversation” balances clarity, accuracy, and metaphor strength.
The terminology is simple: the conversation is a “scroll,” and each chat session is a “thread” within that scroll. This lens shift, renaming how we talk about the system, requires minimal resources to implement but delivers substantial cognitive impact when adopted.
If you implement these ideas, attribution appreciated.