Long-Horizon Human–LLM Interaction
Working with Language Models at Extended Scale
My work focuses on the interactional regimes that emerge when human–LLM conversations persist across long horizons: thousands of turns and hundreds of thousands of tokens, a scale at which stability, drift, and signal coherence become the dominant challenges. This is the scale at which short-session assumptions break down, and where interaction itself becomes an accumulative system.
What I work on
Long-horizon human–LLM interaction
Stability and drift in extended conversations
Interaction-level constraints and continuation reliability
Distinguishing stable from pseudo-stable coherence
Practice-led observation at scale
Tone-of-voice drift (detection and causes)
Stabilising interaction after perturbation
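To make "tone-of-voice drift detection" concrete, here is a toy sketch of one possible approach: compare crude lexical proxies for tone (average sentence length, exclamation rate) between a baseline response and a later one. This is an illustrative assumption, not the method used in this work; the feature set and threshold are hypothetical, and a real detector would use richer stylistic or embedding-based features.

```python
from statistics import mean

def tone_features(text: str) -> dict:
    """Crude lexical proxies for tone: sentence length and exclamation rate.
    (Hypothetical features chosen for illustration only.)"""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = text.split()
    return {
        "avg_sentence_len": mean(len(s.split()) for s in sentences) if sentences else 0.0,
        "exclaim_rate": text.count("!") / max(len(words), 1),
    }

def drift_score(baseline: dict, current: dict) -> float:
    """L1 distance between feature dicts; higher means more drift from baseline."""
    return sum(abs(baseline[k] - current[k]) for k in baseline)

# Compare an early, measured reply with a later, excitable one.
baseline = tone_features("Certainly. Here is a concise summary of the findings.")
later = tone_features("Wow!! That's amazing!! Let's totally do that!!")
print(drift_score(baseline, later))
```

In a long-horizon setting, the same comparison would run over a sliding window of turns rather than a single pair, so that gradual drift accumulates into a visible trend.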
Method and scale of work
I work with publicly deployed, safety-constrained LLMs, including but not limited to ChatGPT, Claude, Gemini, Grok, and Mistral.
The scale of my work to date: