Published on January 16, 2024, 7:28 am

The debate surrounding how to avoid or prevent hallucinations in Artificial Intelligence (AI) is gaining momentum. Generative AI models are designed and trained to produce hallucinations: by construction, they generate data that differs from their training data, so hallucinations are a natural outcome of any generative model. Instead of trying to prevent these hallucinations, we should focus on designing AI systems that can control them.

Before we delve into creating a better-performing AI system, let’s establish a few definitions. A generative AI model, also known as a “generative model,” is a mathematical concept implemented through computational procedures. It can synthesize data that closely resembles the statistical properties of a given training dataset without replicating any specific data within it. The aim of a generative model is to generate data that is realistic and distributionally equivalent to the training data, yet differs from the actual data used for training. Large Language Models (LLMs) are generative models, and conversational interfaces such as AI bots built on them are just one part of a more complex AI system.
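To make that definition concrete, here is a minimal, purely illustrative sketch (not from the original article): a toy “generative model” that estimates the statistics of its training data and then samples new points that match those statistics without copying any particular training example.

```python
import numpy as np

# Toy "generative model": fit a Gaussian to training data, then sample from it.
rng = np.random.default_rng(0)
training_data = rng.normal(loc=5.0, scale=2.0, size=1_000)

# "Training": estimate the distribution's parameters from the data.
mu, sigma = training_data.mean(), training_data.std()

# "Generation": draw new samples that are statistically similar to the
# training data but are not copies of any particular training point.
synthetic_data = rng.normal(loc=mu, scale=sigma, size=10)
print(synthetic_data)
```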

A hallucination is synthetic data that may differ from, or be inconsistent with, real-world data yet still appears realistic. In other words, hallucinations are exactly what generative models produce. While their outputs may seem plausible, generative models have no notion of facts or truth, even if those facts were present in the training data.

It is important for business leaders to understand that generative models should not be treated as sources of truth or factual knowledge. Though they may answer some questions correctly, that is not what they are designed for; using them as such would be like using a racehorse to haul cargo – possible, but not its purpose. In practice, generative AI models are used as components alongside other elements such as data processing, orchestration, and access to databases or knowledge graphs (an approach known as retrieval-augmented generation, or RAG). While the models themselves may produce hallucinations, the surrounding AI system can be designed to identify and mitigate their undesirable effects (e.g., in retrieval or search tasks) while enabling them when they are desired (e.g., for creative content generation), as the sketch below illustrates.
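The following sketch illustrates the retrieval-augmented pattern described above. It is a simplified assumption of how such a system might be wired together: the tiny in-memory document store, the keyword-overlap retriever, and the `llm_generate` stub are hypothetical placeholders, not a real search service or LLM API.

```python
# Minimal, illustrative sketch of retrieval-augmented generation (RAG).
DOCUMENTS = [
    "Amazon Kendra is an intelligent search service.",
    "Retrieval-augmented generation grounds model outputs in retrieved text.",
    "Generative models sample data resembling their training distribution.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (stand-in for real search)."""
    words = set(query.lower().split())
    scored = sorted(DOCUMENTS, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:top_k]

def llm_generate(prompt: str) -> str:
    """Placeholder for a call to a generative model."""
    return f"[model completion for a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    passages = retrieve(query)
    # Grounding the prompt in retrieved passages lets the surrounding system
    # verify and attribute the model's answer, mitigating hallucinations.
    prompt = (
        "Answer using only the passages below, and cite them.\n\n"
        + "\n\n".join(passages)
        + f"\n\nQuestion: {query}\nAnswer:"
    )
    return llm_generate(prompt)

print(answer("What is retrieval-augmented generation?"))
```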

Moving from research to practical implementation: LLMs are a type of stochastic dynamical system, and the controllability of such systems has been studied for decades in the field of control and dynamical systems. Recent research, including a preprint paper, has shown that LLMs can be controlled. This means that, with proper design, an AI system can manage and control hallucinations effectively. It is fascinating to think that concepts I encountered during my doctoral studies more than 25 years ago are now being applied to chatbots trained on web text.
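As a rough sketch of this framing (the notation below is chosen here for illustration and is not taken from the cited preprint): the state is the current token context, the input is what the user or the surrounding system feeds the model, and one generation step advances the state.

```latex
% Illustrative state-space view of an LLM (notation chosen for exposition only).
% x_t : current token context (the "state"); u_t : input tokens from the user
% or the surrounding system; f : one step of (stochastic) autoregressive generation.
\[
  x_{t+1} = f(x_t, u_t), \qquad x_t \in \mathcal{X}, \quad u_t \in \mathcal{U}.
\]
% Controllability (informally): for a set of desirable states
% $\mathcal{X}^\ast \subseteq \mathcal{X}$, from any reachable state $x_0$ there
% exists a finite input sequence $u_0, \dots, u_{T-1}$ that steers the state into
% $\mathcal{X}^\ast$. Controlling hallucinations amounts to choosing inputs
% (prompts, retrieved context) that keep the trajectory in, or return it to,
% $\mathcal{X}^\ast$.
```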

At Amazon, researchers and experts in AI, machine learning, and data science are working to apply scientific advances to real-world problems at scale. With the realization that AI bots can indeed be controlled, our focus at Amazon is on designing systems with this control capability. One example is Amazon Kendra – a search service that mitigates hallucinations by augmenting LLMs so they provide accurate, verifiable information to end users. We aim to put this technology into the hands of customers quickly through collaboration between our research and product teams.

When considering how to control hallucinations in AI systems, it is crucial to remember that we are still in the early stages of generative AI development. Hallucinations can be a controversial topic, but recognizing that they can be controlled marks an important step toward leveraging this technology more effectively as it continues to shape our world.

About the Author:
Stefano Soatto is a Professor of Computer Science at the University of California, Los Angeles (UCLA), and Vice President at Amazon Web Services (AWS), where he leads the AI Labs. He obtained his Ph.D. in Control and Dynamical Systems from the California Institute of Technology in 1996 and has held academic positions at Washington University, the University of Udine, and Harvard University. His background also includes studies in classics and music (he is a jazz fusion musician), as well as skiing and rowing for the Italian National Rowing Team.

At Amazon, Soatto’s responsibilities span research and development leading to products such as Amazon Kendra (search), Amazon Lex (conversational bots), Amazon Personalize (recommendations), Amazon Textract (document analysis), Amazon Rekognition (computer vision), Amazon Transcribe (speech recognition), Amazon Forecast (time-series prediction), Amazon CodeWhisperer (code generation), and the recently introduced Amazon Bedrock (foundation models as a service) and Titan (generative AI models). Prior to joining AWS, he served as Senior Advisor to NuTonomy, pioneers of Singapore’s first autonomous taxi service, and consulted on Qualcomm’s AR/VR efforts. He co-led the UCLA/Golem Team with Emilio Frazzoli and Amnon Shashua in the 2004–05 DARPA Grand Challenge.
