Published on November 17, 2023, 2:04 am

The Innovativeness Gap: Human Children Outperform AI in Problem-Solving, Study Finds

A recent study by researchers at the University of California, Berkeley, has found that human children outperform AI tools in basic problem-solving and thinking tasks. The study, published in the journal Perspectives on Psychological Science, reveals that AI has a blind spot when it comes to innovation.

AI tools are trained on vast amounts of human-created data and excel at predictive, statistics-centered tasks. However, when it comes to truly novel ideas and inventive thinking, they fall short. While AI is good at imitating humans, it lacks the ability to produce fresh and imaginative solutions.

The study focused on tool use and innovation as a means of testing problem-solving skills. Humans can design new tools from scratch or use existing tools in unconventional ways. The researchers gave the same series of problems, each requiring a goal to be achieved without the typical tool, to several AI models, including OpenAI’s GPT-4 and GPT-3.5 Turbo, Anthropic’s Claude, and Google’s FLAN-T5, and to a group of children aged three to seven.

In one example, participants were asked to draw a circle using unconventional objects such as a ruler, a teapot, or a stove instead of a conventional circle-drawing tool like a compass or stencil. The study found that 85 percent of the time, children chose correctly, using the teapot as a makeshift stencil. The AIs predominantly reached for the ruler, as it was the only object conventionally associated with drawing shapes.

The AIs also struggled with inferring novel causal structures and discovering cause-and-effect relationships needed to achieve certain goals. In contrast, even four-year-old children were able to make careful observations about an event and infer cause-and-effect relationships from them.

While it’s challenging to compare human cognition against AI due to the lack of widely agreed-upon definitions for “intelligence,” this research highlights that there are inherent differences between AI reasoning processes and those of human beings.

As we continue to advance in the AI era, understanding these differences becomes crucial to determining how and where to use AI effectively. The study concludes that children’s curiosity, active involvement, self-supervision, and intrinsic motivation shape their learning in ways that differ from the large language models and language-and-vision models used in AI systems.

In summary, this study emphasizes that while AI has its strengths, such as data-driven predictions, it still has a long way to go before it can match the innovative problem-solving skills of human children.
