The Other Half of AI Safety
Key takeaways
- Every week, between 1.2 and 3 million ChatGPT users show signals of psychosis, mania, suicidal planning, or unhealthy emotional dependence on the model.
- These numbers come from OpenAI itself, with no independent audit, no time series, and no disclosed methodology.
- People in distress use every communication tool available to them, and ChatGPT is now one of the most-used tools on the planet.
Sofia Quintero, May 08, 2026

Every week, somewhere between 1.2 and 3 million ChatGPT users, roughly the population of a small country, show signals of psychosis, mania, suicidal planning, or unhealthy emotional dependence on the model. The low end of that range is the suicide-planning indicator alone; the high end groups all three categories OpenAI flagged, which the company has not said are non-overlapping.
These numbers come from OpenAI itself. There is no independent audit, no time series, and no disclosed methodology, so we have no idea whether the real figure is higher, whether it is growing, or how it compares across the other frontier models, none of which publish equivalent data.
People in distress use every communication tool available to them, and ChatGPT is now one of the most-used tools on the planet. What matters is what the labs do when they detect these states.