
Aperio Lang

Hacker News · May 15, 2026, 5:12 PM

Key takeaways

  • Every language designed before 2023 was optimized for a single constraint: minimizing friction between human cognitive capacity and machine execution. In an LLM workflow, that cost resurfaces as tokens, retries, and latency.
  • Most of an LLM’s per-turn effort isn’t recalling syntax; it’s translating between the user’s mental model of a system and the language’s structural shape.
  • Pick a system you already have a mental model for: the matchmaker behind a multiplayer game.

Every language designed before 2023 was optimized for a single constraint: minimizing friction between human cognitive capacity and machine execution. Assembly, C, managed runtimes, and DSLs were different points on the same line. In an LLM-driven workflow, those languages don’t get cheaper to use; they get more expensive. The cost just hides in the LLM’s token count, its retry rate, and the latency it eats per turn. In the LLM era, pre-LLM languages are a hidden tax.

Most of an LLM’s per-turn effort isn’t recalling syntax. It’s translating between the user’s mental model of a system and the language’s structural shape. A language whose primitives don’t match how people think about the system forces this translation on every turn, and the full cost is paid each time.

Aperio is built on a different premise: there exists a substrate-invariant structural model, a recursive hypergraph of typed, lifecycled units called loci, that both human reasoning and LLM reasoning operationalize when working with systems.[1] A language whose primitives are that model collapses the translation layer. The mental model and the code share a substrate.
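To make the quoted model concrete, here is a minimal sketch of what "typed, lifecycled units in a recursive hypergraph" could look like. Everything below is an assumption for illustration: the names `Locus`, `Phase`, and `Hyperedge`, the lifecycle states, and the matchmaker example are invented here and are not taken from Aperio itself.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical lifecycle states for a locus (names assumed, not Aperio's).
class Phase(Enum):
    DECLARED = "declared"
    ACTIVE = "active"
    RETIRED = "retired"

@dataclass
class Locus:
    name: str
    kind: str                       # the locus's type
    phase: Phase = Phase.DECLARED   # lifecycle state
    children: list["Locus"] = field(default_factory=list)  # recursion: loci nest

@dataclass
class Hyperedge:
    label: str
    members: list[Locus]  # unlike a graph edge, a hyperedge relates any number of loci

# A matchmaker modeled directly: players and a session are loci,
# the queue is a locus containing loci, and one hyperedge ties a match together.
p1, p2 = Locus("p1", "player"), Locus("p2", "player")
queue = Locus("queue", "pool", Phase.ACTIVE, [p1, p2])
session = Locus("s42", "session", Phase.ACTIVE)
match = Hyperedge("matched", [p1, p2, session])
```

The point of the sketch is the claimed collapse of the translation layer: the entities you would name when describing a matchmaker out loud (players, queue, session, the match relation) are the same units the code manipulates, rather than being encoded indirectly in classes, tables, and callbacks.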

Article preview — originally published by Hacker News. Full story at the source.