Published on February 1, 2024, 12:15 pm
Artificial intelligence (AI) has made significant advancements in recent years, but there are still limitations to what it can achieve. Scholars at the University of Illinois Urbana-Champaign have found that AI programs, including generative AI programs like ChatGPT, struggle when it comes to handling recursion consistently.
Recursion is the idea of something that contains, or is defined in terms of, a smaller version of itself, a pattern that can in principle repeat indefinitely. It is a concept humans grasp intuitively. For example, the famous Russian Matryoshka "nesting dolls" evoke recursion, as each wooden doll opens up to reveal a smaller doll inside.
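The nesting-doll picture maps directly onto how recursion looks in code: a function that calls itself on the smaller thing inside, until it reaches a smallest case. A minimal sketch (the doll representation here is purely illustrative, not anything from the study):

```python
def open_doll(doll):
    """Open nested dolls recursively until the innermost one is reached."""
    # Each doll is a dict; 'inner' is None for the smallest doll (the base case).
    if doll["inner"] is None:
        return doll["name"]          # base case: nothing left to open
    return open_doll(doll["inner"])  # recursive case: open the next doll

dolls = {"name": "large",
         "inner": {"name": "medium",
                   "inner": {"name": "small", "inner": None}}}
print(open_doll(dolls))  # -> small
```

Each call handles one layer and delegates the rest to an identical, smaller subproblem, which is exactly the structure the researchers tested LLMs on.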
However, according to the researchers at the University of Illinois Urbana-Champaign, AI programs face difficulties in dealing with recursion. This limitation hinders their performance in programming tasks where some code repeats a smaller version of itself.
The researchers conducted tests using large language models (LLMs) like GPT-3.5 Turbo and GPT-4 to examine how generative AI gets stuck when faced with recursion. They used a classic programming task called tree traversal as an example. Tree traversal involves visiting every node of a tree data structure in a specific order.
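To make the task concrete, here is a short sketch of one standard traversal order (in-order traversal of a binary tree); the tuple encoding is an assumption for illustration, not the representation used in the study:

```python
def inorder(node):
    """In-order traversal: left subtree, then the node itself, then right subtree."""
    if node is None:               # base case: an empty subtree contributes nothing
        return []
    left, value, right = node
    return inorder(left) + [value] + inorder(right)

# A small binary tree encoded as nested (left, value, right) tuples:
#        2
#       / \
#      1   3
tree = ((None, 1, None), 2, (None, 3, None))
print(inorder(tree))  # -> [1, 2, 3]
```

The correctness of the output depends entirely on applying the same three-step rule at every node, in the same order, which is the kind of algorithmic consistency the study probes.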
As the depth of the tree increased, the performance of the large language models declined significantly. The complexity and number of reduction steps required also increased, making the traversal problem harder to solve. The researchers discovered that the language models struggled to carry out the right "reduction": they could not reliably replace an element in the tree with the result of its recursive expansion.
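Why depth matters so much: in a full binary tree, the number of recursive steps a traversal must perform grows exponentially with depth. A small sketch of that growth (the step-counting convention, one step per visited node or empty subtree, is my own simplification, not the paper's metric):

```python
def full_tree(depth):
    """Build a full binary tree of the given depth as nested (left, value, right) tuples."""
    if depth == 0:
        return None
    return (full_tree(depth - 1), depth, full_tree(depth - 1))

def count_steps(node):
    """Count traversal steps: one per visited node, one per empty subtree reached."""
    if node is None:
        return 1
    left, _, right = node
    return 1 + count_steps(left) + count_steps(right)

for d in range(1, 6):
    print(d, count_steps(full_tree(d)))  # steps = 2**(d+1) - 1: 3, 7, 15, 31, 63
```

Every one of those steps must apply the same rule without drift, so even a small per-step error rate compounds quickly as the tree deepens.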
This error highlights the challenge faced by LLMs in maintaining algorithmic consistency over extended sequences, especially when adherence to a precise order of operations is necessary.
Not only do LLMs fail to solve some traversals correctly, but they also struggle to infer the rules required for a successful traversal from given examples. This inability points to a gap in their logical reasoning.
On one hand, the inability to work with recursion suggests a lack of logical reasoning in AI programs. On the other hand, even humans without a strong grasp of formal logic can understand and appreciate recursion in certain contexts.
The researchers propose that the design of the language models needs to be reconsidered. They hypothesize that the models have not been optimized well enough to represent recursive patterns effectively. The experiments conducted revealed the models’ inability to “think recursively” and their tendency to rely on non-recursive rules from data.
It is important to consider how the inability to handle recursion may affect the performance of AI across tasks, from essay writing to computer programming. While these programs can compensate to some extent with tricks learned from their training data, the absence of recursion as a fundamental concept is likely to limit their abilities in the long run.
The researchers’ findings emphasize the need for further improvements and optimizations in generative AI programs. As technology continues to evolve, addressing these limitations will be crucial in enhancing the capabilities and reliability of artificial intelligence systems.