This AI knew the answers but didn’t understand the questions
Key takeaways
- Psychologists have long debated whether the human mind can be explained by a single, unified theory or whether functions such as attention and memory must be studied separately.
- A 2025 Nature study introduced "Centaur," an AI model that reportedly performed well across 160 tasks, including decision-making, executive control, and other mental processes.
- A more recent study published in National Science Open challenges those claims, arguing the model's success may reflect overfitting rather than genuine understanding.
Psychologists have long debated whether the human mind can be explained by a single, unified theory or whether different functions, such as attention and memory, must be studied separately. Now, artificial intelligence (AI) is entering that debate, offering a new way to explore how the mind works.
In July 2025, a study published in Nature introduced an AI model called "Centaur." Built on standard large language models and refined using data from psychological experiments, Centaur was designed to simulate human cognitive behavior. It reportedly performed well across 160 tasks, including decision-making, executive control, and other mental processes. The results drew widespread attention and were seen as a possible step toward AI systems that could replicate human thinking more broadly.
A more recent study published in National Science Open challenges those claims. Researchers from Zhejiang University argue that Centaur's apparent success may stem from overfitting. In other words, rather than understanding the tasks, the model may have learned to recognize patterns in its training data and reproduce the expected answers.