Long-Horizon Human–LLM Interaction
Working with Language Models at Extended Scale
My work focuses on the interactional regimes that emerge when human–LLM conversations persist across long horizons — thousands of turns and hundreds of thousands of tokens — where stability, drift, and signal coherence become the dominant challenges. This is the scale at which short-session assumptions break down, and where interaction itself becomes a cumulative system.
What is Long-Horizon Human–LLM Interaction?
Long-horizon human–LLM interaction refers to sustained dialogue between a human and a language model across extended sequences of turns, where earlier constraints continue to shape later behaviour. Unlike short-session prompting, long-horizon interaction exposes phenomena that only become visible at scale: constraint persistence, stability and drift, tone consistency, and the accumulation of interactional structure over time.
In extended conversations — often spanning hundreds or thousands of turns — interaction begins to behave less like a sequence of independent responses and more like a coherent system. Earlier decisions influence later reasoning, shorthand conventions emerge, and the model becomes increasingly constrained by prior context.
This scale of interaction reveals dynamics that are largely invisible in short sessions and requires different methods of analysis and evaluation.
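One of the phenomena named above, drift across turns, can be made concrete with a toy measurement. The sketch below is purely illustrative and not my actual method: it flags turns whose wording diverges sharply from a rolling baseline of preceding responses, using bag-of-words cosine similarity as a deliberately simple stand-in for whatever representation a real analysis would use (embeddings, stylistic features, etc.). The function names and the window size are hypothetical choices.

```python
# Illustrative drift flagging: compare each model response to a rolling
# baseline built from the preceding `window` responses. High score = the
# turn's vocabulary has moved far from the recent conversational baseline.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def drift_scores(responses: list[str], window: int = 5) -> list[float]:
    """For each turn, return 1 - similarity to the preceding `window` turns.

    The first turn has no baseline, so its score is 0.0 by convention.
    """
    scores = []
    for i, text in enumerate(responses):
        baseline = Counter(" ".join(responses[max(0, i - window):i]).lower().split())
        current = Counter(text.lower().split())
        scores.append(1.0 - cosine(baseline, current) if baseline else 0.0)
    return scores
```

In this toy setup, a run of near-zero scores followed by a sudden spike would mark a candidate perturbation point; a real long-horizon analysis would of course need far richer signals than surface vocabulary.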
What I work on
Long-horizon human–LLM interaction
Stability and drift in extended conversations
Interaction-level constraints and continuation reliability
Distinguishing stable from pseudo-stable coherence
Practice-led observation at scale
Tone-of-voice drift (detection and causes)
Stabilising interaction after perturbation
Method and scale of work
I work with publicly deployed, safety-constrained LLMs, including but not limited to ChatGPT, Claude, Gemini, Grok, and Mistral.
The scale of my work to date: