Scoopfeeds — Intelligent news, curated.

The Other Half of AI Safety

Hacker News · May 14, 2026, 12:27 AM

Key takeaways

  • An estimated 1.2 to 3 million ChatGPT users per week show signals of psychosis, mania, suicidal planning, or unhealthy emotional dependence on the model; the low end of that range reflects the suicide-planning indicator alone.
  • The figures come from OpenAI itself, with no independent audit, time series, or disclosed methodology.
  • People in distress use every communication tool available to them, and ChatGPT is now one of the most-used tools on the planet; what matters is how the labs respond when they detect these states.

Sofia Quintero · May 08, 2026

Every week, somewhere between 1.2 and 3 million ChatGPT users, roughly the population of a small country, show signals of psychosis, mania, suicidal planning, or unhealthy emotional dependence on the model. The low end of that range is the suicide-planning indicator alone. The high end groups all three categories OpenAI flagged, and the company has not said whether those categories are non-overlapping.

These numbers come from OpenAI itself. There is no independent audit, no time series, and no disclosed methodology, so we have no idea whether the real figure is higher, whether it is growing, or how it compares with other frontier models, none of which publish equivalent data.

People in distress use every communication tool available to them, and ChatGPT is now one of the most-used tools on the planet. What matters is what the labs do when they detect these states.

Article preview — originally published by Hacker News. Full story at the source.
Read full story on Hacker News →
Aggregated and edited by the Scoop newsroom. We surface news from Hacker News alongside other reporting so you can compare coverage in one place. Editorial policy · Corrections · About Scoop