Published on November 16, 2023, 7:29 pm

Artificial General Intelligence (AGI) is a topic that has generated significant interest and controversy in the tech industry. However, there has been a lack of consensus on what exactly AGI entails. To address this issue, a team at Google DeepMind has published a paper that presents not just one, but a whole taxonomy of definitions for AGI.

Broadly speaking, AGI refers to artificial intelligence that can perform tasks equal to or better than humans across various domains. However, specific criteria such as human-likeness, the range of tasks involved, and the level of proficiency vary among different interpretations. The Google DeepMind researchers aimed to formulate a new definition by examining prominent existing definitions and identifying their essential shared features.

In addition to proposing a revised definition, the team outlined five levels of AGI: emerging, competent, expert, virtuoso, and superhuman. Emerging AGI includes today's advanced chatbots, such as ChatGPT and Bard; no system has yet reached any level beyond it. This classification offers valuable clarity for those who use the term casually without considering its implications.
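The five levels form a simple ordered scale, which can be sketched as an enumeration. This is a minimal illustration, not code from the paper: the level names come from the article above, while the numeric ordering and the comments are assumptions for clarity.

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """The five AGI performance levels named in the DeepMind paper.
    Numeric ordering is an illustrative assumption."""
    EMERGING = 1    # e.g. today's advanced chatbots (ChatGPT, Bard)
    COMPETENT = 2
    EXPERT = 3
    VIRTUOSO = 4
    SUPERHUMAN = 5

# Per the paper's assessment, current frontier chatbots sit at the lowest level,
# and no system has yet been judged to reach any level beyond it.
current_frontier = AGILevel.EMERGING
assert current_frontier < AGILevel.COMPETENT
```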

The researchers deliberately released their paper without fanfare, but two of its authors, Shane Legg (DeepMind's co-founder and chief AGI scientist) and Meredith Ringel Morris (Google DeepMind's principal scientist for human and AI interaction), shared their motivation for developing these definitions and what they hoped the work would accomplish.

Legg emphasized the need for sharper definitions because discussions around AGI often rest on divergent interpretations, leading to confusion. As AGI gains prominence even in political discourse (it has been mentioned by the UK prime minister), clearly defining its scope becomes imperative. When Legg coined the term around two decades ago, its vagueness was intentional: AGI then named an aspiration for an emerging field rather than a concrete concept.

However, given advances in generative models and the rising hype surrounding AGI, the time has come for more precise definitions. Legg argued that AGI must possess two key attributes: general-purpose breadth and high performance. Separating these two axes distinguishes earlier AI systems that excel at a single task (like IBM's Deep Blue) from hypothetical AI that can perform many tasks proficiently. The breadth of human intelligence far exceeds the narrow expertise of specialized AI programs.
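The two attributes Legg names can be treated as independent axes, so that a system qualifies only if it scores on both. The sketch below is a hypothetical illustration of that two-axis framing; the record type, function names, and the Deep Blue example's classification are assumptions, not the paper's own tables.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Illustrative record placing a system on two independent axes."""
    name: str
    general_purpose: bool   # breadth: performs well across many task domains
    high_performance: bool  # depth: at or above skilled-human level

def is_agi_candidate(system: AISystem) -> bool:
    # Legg's two key attributes: a candidate needs BOTH breadth and depth.
    return system.general_purpose and system.high_performance

# A narrow expert system scores on performance but not breadth, so it fails.
deep_blue = AISystem("Deep Blue", general_purpose=False, high_performance=True)
assert not is_agi_candidate(deep_blue)
```

The point of the two-axis framing is that neither attribute alone suffices: a superhuman chess engine fails on breadth, while a mediocre generalist fails on performance.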

Critically, the researchers argued that AGI should not only perform various tasks but also be able to learn new ones, evaluate its own performance, and seek assistance when needed. Moreover, they emphasized that measuring what an AGI can accomplish matters more than understanding how it achieves those outcomes. Morris clarified that although the inner workings of cutting-edge systems like large language models are important, our limited current understanding of those processes means the focus today must be on what is measurable.

Evaluating the performance of present-day models is already contested: researchers debate whether passing certain tests signifies genuine intelligence or mere rote learning, and assessing future, more capable models will pose even greater challenges. The team suggests that, if AGI ever materializes, it should be evaluated on an ongoing basis rather than through isolated, one-off assessments.

Another crucial point made by the researchers is that AGI does not inherently imply autonomy. Although there is often an assumption that AGI systems should operate autonomously, it is possible to build super-intelligent machines fully controlled by humans.

Notably absent from this discussion of what constitutes AGI is any consideration of why we should pursue its development. Critics argue that the concept remains too ill-defined to anchor a well-scoped engineering project. Even so, bringing clarity to a previously convoluted concept is valuable for moving forward with meaningful discussion and investigation of AGI's potential applications.

Legg concluded by stating that progressing beyond these definition issues could lead to more substantive explorations and avoid unproductive conversations. AGI represents an exciting frontier, and refining our understanding of it is crucial for future advancements in this field.

