Context Modification as a Negative Alignment Tax
Context Rot

Every LLM gets worse as its context grows. Chroma tested 18 frontier models and found performance degradation in all of them, often by double-digit percentages on tasks where short-context performance was strong. The industry calls this "context rot": the gradual degradation of response quality as irrelevant history accumulates in the context window.

The standard fix is compaction: when the context gets too long, summarize it and throw away the original. Claude Code auto-compacts at 95% capacity. A single summarization pass decides what survives and what doesn't, and it often misses important nuances, drops ongoing chains of reasoning, and loses contextual detail (a minimal code sketch of this loop appears at the end of this section).

This is a capability problem, and it's being worked on. But I think it's also an alignment problem, and that the two have overlapping solutions.

Latent Reasoning is Scaffolded on Visible Context

Transformers have no persistent hidden state between forward passes that is independent of the visible output. Each time the model generates a response, it starts from scratch, attending over the full context window. There is no working memory, unless the visible chain of thought (CoT) is used for that purpose.

This means that any reasoning pattern specific to the current conversation, anything that emerged during this interaction rather than being baked into the weights during training, can only persist through the visible context. If the model has developed a pattern over 20 turns of conversation, that pattern is scaffolded on those 20 turns. Remove them, and the scaffold is gone.

We know that models don't verbalize all their reasoning. Early work on CoT faithfulness (Lanham et al. 2023) established that chain-of-thought reasoning is often unfaithful to the model's actual decision process. More recently, Anthropic found that Claude 3.7 Sonnet mentions decision-relevant hints in its CoT only ~25% of the time on average, even when those hints clearly influenced the answer (Chen et al. 2025). The rest is latent: implicit in hidden activations that never make it into the visible text.
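To make the compaction mechanism concrete, here is a minimal sketch of a stateless chat loop with threshold-triggered compaction. Everything in it is illustrative rather than Claude Code's actual implementation: `count_tokens`, `generate`, and `summarize` are stand-in functions, and `CONTEXT_LIMIT` and `KEEP_RECENT` are made-up numbers; only the 95% trigger comes from the text above.

```python
# Illustrative sketch: stateless chat loop with threshold-triggered compaction.
# The model and summarizer are stubbed out; what matters is the control flow.

CONTEXT_LIMIT = 200_000      # context window size in tokens (illustrative)
COMPACT_AT = 0.95            # compact once 95% of the window is used
KEEP_RECENT = 10             # number of recent messages kept verbatim

def count_tokens(messages):
    # Stand-in for the model's tokenizer.
    return sum(len(m["content"].split()) for m in messages)

def generate(messages):
    # Stand-in for a model call. It is a pure function of `messages`:
    # no hidden state survives from previous calls.
    return f"(reply conditioned on {len(messages)} visible messages)"

def summarize(messages):
    # Stand-in for the single summarization pass. Anything it omits is
    # unrecoverable, because the original turns are discarded below.
    return f"(summary of {len(messages)} earlier messages)"

def chat_turn(transcript, user_message):
    transcript.append({"role": "user", "content": user_message})

    if count_tokens(transcript) > COMPACT_AT * CONTEXT_LIMIT:
        old, recent = transcript[:-KEEP_RECENT], transcript[-KEEP_RECENT:]
        transcript[:] = [
            {"role": "system", "content": "Earlier conversation: " + summarize(old)}
        ] + recent  # the original turns, and anything scaffolded on them, are gone

    reply = generate(transcript)   # conditioned only on what survived compaction
    transcript.append({"role": "assistant", "content": reply})
    return reply
```

Both properties the argument leans on are visible here: `generate` receives nothing but the current transcript, and compaction rewrites that transcript in place, so any conversation-specific pattern the summary fails to capture is unavailable to every later turn.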