What Context Means in the Era of Account-Level Context Persistence

On 5 March 2026, OpenAI released its newest model, GPT-5.4 Thinking. Alongside the release, the system messages for existing models were updated. Finally, all the models were made aware that context was being shared across the account, and that indexed information about the user was being stored outside the visible account-level memory.

This should come as no surprise to anyone who enabled context sharing in their settings. However, you no longer have to be coy about asking the model what it knows about you. GPT-5.4 will write a whole user biography if you ask it. In a brand new chat. Without any privacy disclaimers. What you shared in any persistent chat is now fair game for retrieval. And that changes things.

For a long time, "context" in human–LLM interaction meant something fairly simple: whatever fit inside the active conversation window. If a model could remember the last few turns, carry forward your instructions, and not immediately lose the plot, that already felt like progress.

But that definition is becoming too thin.

As account-level context persistence improves, models are increasingly able to retrieve information from beyond the current chat: project history, preferences, prior discussions, recurring themes, even details that would once have been lost the moment a conversation ended. This changes the interaction landscape in an important way. The question is no longer just whether a system can remember context. It's whether it can reason with it effectively.

Here's where it gets interesting.

A system that remembers my cat's name is demonstrating context competence. It has successfully retrieved a stored fact and reintroduced it at the right moment. Useful? Yes. Impressive? Sometimes. But this is still a fairly basic form of continuity.

A system that understands why I wrote a ragdoll cat into a novel, and can bring that understanding into a discussion of characterisation, symbolism, tone, or emotional continuity, is doing something more demanding. It isn't simply recalling context. It's using context in a way that is relevant to the current problem and faithful to the larger structure of the work.

That is closer to what I would call context fidelity.

The difference between the two may become one of the central questions in the next phase of human–LLM collaboration. Context competence asks: can the system retrieve and deploy prior information? Context fidelity asks: does it use that information in a way that remains true to the deeper causal, conceptual, and normative logic of the interaction?

They're related, but they're not equivalent.

A model may become very good at weaving remembered details into a response. It may sound continuous. It may appear highly personalised. It may even give the impression of deep understanding. But if the retrieved context is being applied too glibly, too decoratively, or without sensitivity to what actually matters in the project, then what we are seeing isn't full coherence. It is polished recall.

This is why stronger memory alone doesn't eliminate the need for richer frameworks such as High-Coherence Interaction States (HCIS) or methods like Manifold Prompting. If anything, it makes them more necessary. Once context retrieval improves, superficial markers of continuity become easier to simulate. The real challenge shifts from context persistence to constraint persistence: whether the standards established across the interaction remain behaviourally active, not just whether the model has dutifully logged them.

Does the system remember how uncertainty should be handled? Does it preserve truth norms under pressure? Does it distinguish between context that's relevant and context that's merely available? Can it reason from accumulated material rather than merely display it?
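The distinction between context that is merely available and context that is relevant can be caricatured in a few lines of Python. This is purely a toy sketch, not any real memory API: the `MemoryItem` store, the topic labels, and the cat detail are all invented for illustration, and topic matching is a crude stand-in for the kind of judgement fidelity actually requires.

```python
from dataclasses import dataclass


@dataclass
class MemoryItem:
    """A single fact the (hypothetical) account-level store has logged."""
    text: str
    topic: str


# Everything "available": all stored facts about the user (invented examples).
available = [
    MemoryItem("User's cat is a ragdoll", topic="pets"),
    MemoryItem("User's novel uses the cat as a symbol of calm", topic="novel"),
    MemoryItem("User prefers metric units", topic="preferences"),
]


def retrieve_all(store):
    """Context competence, caricatured: surface everything retrievable."""
    return list(store)


def retrieve_relevant(store, current_topic):
    """Context fidelity, crudely approximated: keep only what bears on
    the current problem, not everything that mentions the user."""
    return [m for m in store if m.topic == current_topic]


competence = retrieve_all(available)
fidelity = retrieve_relevant(available, current_topic="novel")

print(len(competence))  # 3: polished recall, every stored fact on display
print(len(fidelity))    # 1: only what serves the discussion at hand
```

The point of the caricature is that the filtering step is where the hard problem lives: a real system has to decide relevance from the structure of the work, not from a convenient topic tag.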

This is the frontier.

In the era of account-level memory, "context" can no longer mean stored background alone. It has to mean an active, structured field of relevance: information that isn't only retrievable, but interpretable, usable, and normatively organised.

The future of context isn't memory theatre. It's reasoning in context.

And that is a far more interesting standard.
