Published on November 9, 2023, 7:06 am
Google DeepMind introduces a new framework called “Levels of AGI” to classify the capabilities and behaviors of artificial general intelligence (AGI) and its precursors. This framework aims to provide a common language for comparing models, assessing risks, and measuring progress in the field of AI.
The “Levels of AGI” framework is based on two dimensions: depth (performance) and breadth (generality) of capabilities. By categorizing AGI systems into different levels, researchers can better understand how current systems fit into the overall landscape of AGI development.
The paper acknowledges that AGI has transitioned from a philosophical concept to a practical reality due to rapid advancements in machine learning models. As such, it is crucial for the AI research community to define the term “AGI” and quantify attributes such as performance, generality, and autonomy.
In the paper, several well-known definitions of AGI are discussed and critiqued. These include the Turing Test proposed by Alan Turing, philosopher John Searle’s notion of “strong AI,” which ties AGI to consciousness, and Mark Gubrud’s definition based on analogies to the human brain. The authors argue that each of these definitions is either too limited or too imprecise to capture the essence of AGI.
To address this challenge, Google DeepMind proposes six principles for categorizing AGI systems:
1. Focus on capabilities rather than processes: The emphasis is on what an AGI system can do rather than how it achieves it.
2. Focus on generality and performance: Both breadth (generality) and depth (performance) are essential components of AGI.
3. Focus on cognitive and metacognitive tasks: While debates about robotic embodiment continue, most definitions focus on cognitive tasks. Metacognitive abilities such as learning new tasks are considered vital for achieving generality.
4. Focus on potential rather than deployment: Demonstrating an AGI system’s capability should be sufficient to classify it as AGI, regardless of its real-world deployment.
5. Focus on ecological validity: Tasks that correspond to real-world applications and are valued by humans should be used to assess AGI.
6. Focus on the path to AGI, not a single endpoint: Defining stages of AGI progress allows for a clearer understanding of the development and policy implications.
The researchers also propose a matrixed scale for categorizing AGI systems along both dimensions: performance ranges from Level 0 (“No AI”) to Level 5 (“Superhuman”), with today’s frontier language models such as GPT-4 placed at Level 1 (“Emerging AGI”). A separate taxonomy covers autonomy, from AI as a simple tool up to AI as a fully autonomous agent; each level of autonomy creates new human-computer interactions and poses new risks.
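The matrixed scale above can be sketched as a small data structure. This is an illustrative sketch only: the level names follow the paper as summarized here, but the `AGIClassification` class, its `label` helper, and the per-level comments are assumptions for demonstration, not part of DeepMind’s framework.

```python
from dataclasses import dataclass
from enum import Enum

class Performance(Enum):
    NO_AI = 0        # e.g. a calculator
    EMERGING = 1     # roughly comparable to an unskilled human
    COMPETENT = 2    # at least 50th percentile of skilled adults
    EXPERT = 3       # at least 90th percentile
    VIRTUOSO = 4     # at least 99th percentile
    SUPERHUMAN = 5   # outperforms all humans

class Generality(Enum):
    NARROW = "narrow"    # a clearly scoped task or set of tasks
    GENERAL = "general"  # a wide range of non-physical tasks

@dataclass(frozen=True)
class AGIClassification:
    performance: Performance
    generality: Generality

    def label(self) -> str:
        # Render a human-readable cell of the performance x generality matrix.
        return (f"Level {self.performance.value} "
                f"({self.performance.name.title()}) "
                f"{self.generality.value.title()} AI")

# Per the article, frontier LLMs land at Level 1 on the general side.
frontier_llm = AGIClassification(Performance.EMERGING, Generality.GENERAL)
print(frontier_llm.label())  # Level 1 (Emerging) General AI
```

Modeling performance and generality as separate enums mirrors the framework’s key point: the two dimensions vary independently, so a system can be superhuman yet narrow.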
While developing an AGI benchmark is challenging, Google DeepMind highlights the need for a wide range of cognitive and metacognitive tasks to measure different traits associated with AGI. These tasks may include verbal intelligence, logical reasoning, social intelligence, and creativity. The benchmark should evolve over time to account for the complexity and progress of AGI development.
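One way such a benchmark could aggregate results is to average task scores within each trait category and map the averages onto performance levels. The trait names below come from the article, but the tasks, scores, and percentile thresholds are hypothetical placeholders, not something the paper specifies.

```python
from statistics import mean

# Hypothetical mapping from human-percentile score to a level name.
THRESHOLDS = [
    (0.99, "Virtuoso"),
    (0.90, "Expert"),
    (0.50, "Competent"),
    (0.00, "Emerging"),
]

def level_for(score: float) -> str:
    # Return the highest level whose cutoff the score meets.
    for cutoff, name in THRESHOLDS:
        if score >= cutoff:
            return name
    return "No AI"

def summarize(results: dict[str, list[float]]) -> dict[str, str]:
    # Average per-trait task scores, then name a level for each trait.
    return {trait: level_for(mean(scores)) for trait, scores in results.items()}

# Illustrative per-task scores (human-percentile) for each trait.
results = {
    "verbal intelligence": [0.95, 0.92],
    "logical reasoning": [0.60, 0.55],
    "social intelligence": [0.40, 0.45],
    "creativity": [0.51, 0.49],
}
print(summarize(results))
```

Keeping traits separate, rather than collapsing them into one score, reflects the article’s point that generality requires breadth across many task families, and it lets the task list grow as the benchmark evolves.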
Overall, the “Levels of AGI” framework proposed by Google DeepMind provides an important step toward standardization in defining and categorizing AGI systems. It offers a comprehensive approach that can guide the development of AI systems and address potential risks along the path to AGI.