Signaling and Perverse Adoption of Expensive AI


LessWrong · May 12, 2026, 2:34 PM

This post is crossposted from my Substack, Structure and Guarantees, where I explore how formal verification and related ideas might scale to more complex intelligent systems. Here I argue that modern AI ecosystems are shaped not only by engineering trade-offs but also by signaling incentives grounded in evolutionary psychology. Engineers build status by solving socially legible hard problems, while organizations build status through socially legible expensive deployments. The result is a subtle pressure toward AI approaches that are unusually effective as prestige signals, even when alternative system designs could produce solutions that are cheaper, faster, or more reliable.

I’ve been making the case that today’s popular style of generative AI is fundamentally slow and unreliable. To make other styles applicable (most importantly those based on symbolic reasoning and proof), I’ve also argued for a different sort of learning loop: rather than only improving a model based on the mistakes it makes, we also make strategic changes to the world and to the problem formulation, tending to simplify AI challenges. Examples include simplifying or avoiding natural-language processing (e.g. by using computer-oriented communication methods), computer vision (e.g. by rearranging environments to be more geometrically regular), and challenging programming tasks (e.g. by adopting better programming languages).

This perspective is so far from the mainstream that it isn’t even on the menu of positions that folks are used to arguing with. One natural hypothesis is that success in the real world fundamentally requires solving familiar hard AI problems, so we really do need to stick with techniques like deep learning that have the best track record on those problems. That hypothesis is compatible with an engineering mindset, in which, to build better artifacts, we may need to trade off between dimensions like generality, speed, and reliability –

Article preview — originally published by LessWrong. Full story at the source.