Claude is Now Alignment Pretrained
Anthropic are now actively using the approach to alignment often called "Alignment Pretraining" or "Safety Pretraining": using Stochastic Gradient Descent on a large body of natural or synthetic documents showing the AI assistant doing the right thing. They tried this out, found it works well, and are now using it. I'm absolutely delighted.

I've been advocating this approach on LessWrong and the Alignment Forum for several years:

- How to Control an LLM's Behavior (why my P(DOOM) went down)
- Motivating Alignment of LLM-Powered Agents: Easy for AGI, Hard for ASI?
- A "Bitter Lesson" Approach to Aligning AGI and ASI
- Why Aligning an LLM is Hard, and How to Make it Easier
- The Best Way to Align an LLM: Is Inner Alignment Now a Solved Problem?
- Pretraining on Aligned AI Data Dramatically Reduces Misalignment—Even After Post-Training

I've been very excited about this alignment technique for a couple of years, ever since I read the seminal paper demonstrating that it was extremely effective, Pretraining Language Models with Human Preferences (Korbak et al., '23). This was later followed up by Safety Pretraining: Toward the Next Generation of Safe AI (Maini, Goyal, Sam et al., '25); You Are What You Eat - AI Alignment Requires Understanding How Data Shapes Structure and Generalisation (Lehalleur, Hoogland, Farrugia-Roberts et al., '25); and most recently Alignment Pretraining: AI Discourse Causes Self-Fulfilling (Mis)alignment (Tice, Radmard, et al., '26).

Others have also posted on LessWrong on this subject, such as TurnTrout's Self-Fulfilling Misalignment Data Might Be Poisoning Our AI Models, and Beren Millidge's Alignment In The Age Of Synthetic Data, The case for removing alignment and ML research from the training data, and My path to prosaic alignment and open questions. Nostalgebraist discussed something closely related in the void, as did Seth Herd in Broadening the training set for alignment.

Anthropic are also