The fall of the theorem economy (David Bessis)
I found this post from mathematician David Bessis very interesting. It explains that while AI can be trained to prove mathematical theorems in Lean, AI-written proofs are often poor at conveying useful mathematical insights. Bessis argues that the human-usable intuitions gained in the process of proving a theorem are more important than the proof itself, and that AI provers do not address this need.

It's known that AIs do sloppy work on hard-to-verify tasks, but I was surprised that similar issues arise even in ungameable domains like math. Even when an AI's work is provably correct, its sloppiness may make it much harder for humans to benefit from.

I definitely recommend reading the whole post, but I'll pull out a few key quotes here:

Mathematics is meant to improve human understanding

> The product of mathematics is clarity and understanding. Not theorems, by themselves. Is there, for example, any real reason that even such famous results as Fermat's Last Theorem, or the Poincaré conjecture, really matter? Their real importance is not in their specific statements, but their role in challenging our understanding, presenting challenges that led to mathematical developments that increased our understanding…
>
> Mathematics only exists in a living community of mathematicians that spreads understanding and breathes life into ideas both old and new. The real satisfaction from mathematics is in learning from others and sharing with others. All of us have clear understanding of a few things and murky concepts of many more. There is no way to run out of ideas in need of clarification…

Lean proofs from AI don't contain useful abstractions

> A few weeks ago, Math Inc, one of the best-funded AI-for-math startups, produced a Lean formalization of Maryna Viazovska's spectacular work on the sphere packing problem in dimensions 8 and 24, results which earned her the Fields Medal in 2022. That was impressive in its own right: never before had theorems of this caliber been autoformalized. Yet this clear suc