Open strategic questions for digital minds

LessWrong · May 1, 2026, 9:56 AM

These are strategic questions about digital minds and AI welfare that I think are especially important, and where I'd like to see more progress. A common theme is that they matter for what we should do concretely under uncertainty about AI moral status. This is a current snapshot of my views, and I expect them to change. What do you think? Any questions you'd add?

Approach

What's robustly good to do now, under deep uncertainty?

I think this is the leading question we should ask. We don't know whether AIs are or will become moral patients, and resolving that question isn't tractable in the short term. What matters most are the long-run effects of our actions, since the vast majority of digital minds, if they ever exist, will be created after the transition to advanced AI. And there are serious long-run risks from both over- and under-attributing moral status.

So we should look for actions that are robustly positive in the long run: good if AIs are (or will be) moral patients, not bad if they aren't, and compatible with human and animal welfare (~AI safety). Finding such actions is hard, and most options carry risk, including bad lock-ins.

Can AI welfare work wait for ASI?

Given how seemingly intractable the questions around AI consciousness and moral status are, it's tempting to punt them to the future and let superintelligent AI solve them for us. On this view, what matters most for the long-run welfare of all moral patients is successfully navigating the transition to a world with ASI, and ASI can take it from there.

I think this is partly right, and I recommend Oscar Delanay's nuanced post on this issue. But I suspect there is still a lot we should think about and do with respect to AI welfare before ASI, especially on governance and strategy: setting up robust legal frameworks, avoiding harmful lock-ins of institutions, values, and technical systems, and shaping norms that support good long-run outcomes.
A useful overarching goal is to increase the likelihood that the people,

Article preview — originally published by LessWrong. Full story at the source.