Human-looking robots are a bad idea
I want to share some thoughts on the issues stemming from human-AI interaction and argue that putting those AIs into human-looking robots would make the risks significantly worse.

Current risks from advanced AI

Every now and then there is a scandal about AI chatbots actively influencing people in dangerous ways. Those chatbots sometimes reinforce delusions, convince people to isolate themselves and not seek help, suggest harmful actions, and so on.

The efforts of companies to deal with these issues have been questionable. I'm not looking to blame any particular company in this post; for illustration, here are some examples of positions various companies have taken:

- Efforts to diffuse responsibility: children lie about their age, which is an industry-wide challenge, and parents should control access to platforms. To deal with this, society will have to figure out new guardrails. Social platform company Meta has lobbied for laws mandating age verification at the device or app-store level.
- Pressure to increase engagement: some have permitted sensual chats with children and argued that safety restrictions had made their chatbots boring; some have argued that chatbots should stay in character above all (to keep the user happy) and be trusted to make the right call, even when the user has thoughts of self-harm.
- One company has denied responsibility for causing a suicide, arguing that the teen misused the chatbot.

The companies creating these AIs are trying to frame these problems as acceptable risk. I wouldn't attribute this to malice, but to human biases and market forces. My goal is not to argue that current risks are unacceptable, or to advocate for a specific policy. My goal is to describe the situation and the concerns we need to take into account when making future decisions. I'm not saying that the issues we see with thos