Published on November 21, 2023, 2:12 pm

Generative AI, often shortened to GenAI, is rapidly gaining popularity as businesses seek to leverage its capabilities for a wide range of purposes. Whether it’s driving internal efficiency and productivity or enhancing external products and services, companies are racing to implement generative AI technologies across different sectors.

One area where generative AI has shown immense potential is in the realm of conversational interfaces. With the introduction of ChatGPT, chatbots have made a comeback, now rebranded as “copilots” and “assistants.” These conversational interfaces serve as orchestrators, helping users effortlessly complete multiple tasks through a free text interface.
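The orchestration idea described above can be sketched in a few lines: free-text input is routed to one of several task handlers. This is a minimal illustration only; the task names, the keyword-based router, and the handler functions are all hypothetical, and a production copilot would typically use an LLM to classify intent rather than keyword matching.

```python
def summarize(text: str) -> str:
    # Placeholder task handler; a real copilot would call an LLM here.
    return f"[summary of: {text}]"

def search(text: str) -> str:
    # Placeholder for a retrieval or vertical-search task.
    return f"[search results for: {text}]"

def fallback(text: str) -> str:
    # Graceful default when no task matches the prompt.
    return "Sorry, I can't help with that yet."

# Map a trigger keyword to its task handler (illustrative routing table).
ROUTES = {
    "summarize": summarize,
    "search": search,
}

def route(prompt: str) -> str:
    """Dispatch a free-text prompt to the first matching task handler."""
    lowered = prompt.lower()
    for keyword, handler in ROUTES.items():
        if keyword in lowered:
            return handler(prompt)
    return fallback(prompt)
```

The point of the sketch is the shape, not the matching logic: one entry point, many narrow tasks behind it, and a safe fallback for everything else.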

The beauty of copilots lies in their adaptability: a free-text interface faces an effectively unbounded range of possible input prompts, and a well-designed copilot handles them gracefully and safely. However, when developing a chatbot, it is crucial to start small and solve one task exceptionally well. Attempting to solve every task at once usually means falling short of user expectations.

AlphaSense provides an excellent example of this approach. They initially focused on earnings call summarization as their first task: it was well-scoped, highly valuable to their customer base, and aligned with existing workflows in the product. Along the way, they gained valuable insights into LLM development, model selection, training-data generation, retrieval-augmented generation, and user experience design, a knowledge base that later facilitated the expansion into open chat.
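Retrieval-augmented generation, one of the techniques mentioned above, can be sketched simply: retrieve the documents most relevant to a query, then prepend them as context in the prompt sent to the model. The corpus, the word-overlap relevance scorer, and `build_prompt` below are illustrative assumptions; production systems typically use embedding-based retrieval over a vector index.

```python
# Tiny illustrative corpus standing in for a store of earnings-call snippets.
CORPUS = [
    "Q3 revenue grew 12% year over year, driven by subscription sales.",
    "The company announced a new AI-powered search feature.",
    "Operating margin declined due to increased R&D spending.",
]

def score(query: str, doc: str) -> int:
    # Naive relevance: count distinct query words that appear in the document.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list:
    # Return the k highest-scoring documents for the query.
    return sorted(CORPUS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # Prepend retrieved context so the model answers from the documents
    # rather than from its parametric memory alone.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

Swapping the scorer for embeddings and the corpus for a vector database changes the quality, not the structure: retrieve, assemble, generate.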

As of late 2023, OpenAI leads the pack in LLM performance with GPT-4. However, other well-funded competitors like Anthropic and Google are determined to catch up. The open-source community has become instrumental in driving performance improvements while reducing costs and latency. Models such as LLaMA and Mistral offer powerful foundations for innovation.

Major cloud providers like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure recognize the potential of open source and have adopted a multi-vendor approach, not only supporting open-source models but also amplifying their capabilities. Although open-source models may not yet match closed models on published performance benchmarks, they have clearly surpassed them on the trade-offs developers face when taking a product to market, such as cost and latency.

To aid developers in selecting the most suitable model for their needs, the “5 S’s of Model Selection” can serve as a helpful framework. It weighs five factors: size, speed, simplicity, support, and security, the essential elements to consider when choosing a generative AI model.
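One way to make the 5 S's concrete is to score each candidate model on the five criteria and rank the results. The candidate names, the 1-to-5 scores, and the equal weighting below are all made-up placeholders for illustration, not real benchmark data; a real evaluation would weight the criteria by product priorities.

```python
CRITERIA = ("size", "speed", "simplicity", "support", "security")

# Hypothetical candidates with placeholder 1-5 scores (higher is better).
CANDIDATES = {
    "closed-api-model": {"size": 2, "speed": 3, "simplicity": 5, "support": 4, "security": 2},
    "open-7b-model":    {"size": 5, "speed": 5, "simplicity": 3, "support": 3, "security": 5},
}

def total(scores: dict) -> int:
    # Equal weighting across the five S's for simplicity.
    return sum(scores[c] for c in CRITERIA)

def rank(candidates: dict) -> list:
    # Best-scoring model first.
    return sorted(candidates, key=lambda name: total(candidates[name]), reverse=True)
```

The value of the exercise is less the final number than the forced conversation about which of the five S's actually matters for the product at hand.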

As generative AI continues to evolve, its potential applications are expanding rapidly. From vertical search and photo editing to writing assistants and chatbots, businesses are discovering new ways to leverage this revolutionary technology. By starting small, learning along the way, and embracing the power of open source, companies can unlock the full potential of generative AI in their operations while delivering enhanced experiences to their customers.
