Scoopfeeds — Intelligent news, curated.
Best Intro AI X-Risk Resource?

LessWrong · May 10, 2026, 11:03 AM · Also reported by 1 other source

I'd like the best short article and video intro explainers, shooting for the 15-minute range. At least one of the articles shouldn't be on LessWrong, because some people will be turned off by this forum. It should be simple and require no prerequisite knowledge. My parents, and ideally my grandparents, should be able to understand it. Failing that, a normal college student at an average university should be able to, or at least a STEM major.

It should have links to more details, in case someone's interested. There are smart 13-year-olds who will gladly read a million words and then have their lives changed, if there are enticing links; I was one of them, and many of you probably were too. The Sequences and HPMOR are good on the rationality front, but I'd like an AI x-risk intro with more focused links.

Lastly: I don't mean to be presumptuous, but if I were running LessWrong, I would pin the best couple of intros to the sidebar, or something similar. It needs to be really easy for someone who randomly followed a link to this "LessWrong" thing, and has no clue what the hell all this is, to click the "Why AI Will Kill Everyone" button and then read or watch what's linked.

Sometimes there's a fitting moment to link an outsider to a short, simple explanation of the basic arguments for AI x-risk. I don't know what to link them to! AGI Ruin: List of Lethalities is not a good intro. IABIED would be great, except it's a whole book. Maybe AGI Safety From First Principles? I haven't yet read through it, so I don't know whether it would fit.

Video-wise, Rob Miles has an old Intro to AI Safety video. My memory of it suggests it's not great as an intro, even though I found his other videos excellent for introducing specific topics (e.g., he's how I originally heard about quantilizers and the inner/outer misalignment distinction). I plan to review it later, along with the first-principles sequence.

AI 2027 is mostly about forecasting. It's also too detailed for the kind of intro I'm looking for, though.

