Published on November 30, 2023, 7:55 am
Generative AI: Overcoming Challenges to Adoption
The public release of ChatGPT one year ago sparked a race among businesses to incorporate generative AI into their workflows. Yet despite widespread adoption, CIOs still face obstacles in fully implementing the technology.
Many companies saw the potential of generative AI and adjusted their product roadmaps and talent priorities accordingly. According to SoftBank data, nearly one-third of CTOs changed their strategies due to generative AI. And according to a PwC survey, more than half of executives reported implementing generative AI to some extent within their organizations.
As companies began experimenting with generative AI, concerns emerged regarding costs, copyright issues, and data protection. Transparency surrounding AI systems also became an important topic of public discussion. Nevertheless, businesses were determined to reap the benefits of generative AI adoption.
While generative AI and its underlying large language models are becoming common in enterprise tech stacks, there are still challenges that CIOs must navigate carefully. It is crucial for leaders to understand when generative AI may not be the best tool for certain tasks.
One of the main hurdles is that most foundation models require large data sets that many businesses do not have access to. Additionally, according to a report from Stanford University, MIT, and Princeton University, the transparency of major foundation model developers has room for improvement. The researchers scored models on the transparency of the resources used to build each model as well as its downstream use. While some models received decent scores (e.g., Meta’s Llama 2 at 54 out of 100), others fared poorly (e.g., Amazon’s Titan at just 12 out of 100).
Technology leaders may be familiar with working with vendors but may not know what questions to ask about model details and limitations. This lack of clarity makes it essential for CIOs to tread carefully when adopting these technologies.
Despite these challenges, businesses continue to push forward. However, promoting responsible and ethical AI practices is crucial for maintaining customer trust and avoiding negative consequences. A QuantumBlack survey revealed that the majority of businesses were not actively working to mitigate risks associated with AI; those that were took steps such as implementing acceptable use policies and providing training opportunities.
To ensure responsible AI development and use, some companies have formed compliance and cybersecurity teams dedicated to understanding the evolving landscape. Principal Financial Group, for example, has established a team of experts to refine their policy on AI and support responsible exploration.
Vendors themselves have also responded to concerns around generative AI adoption. OpenAI has added security guardrails and privacy options to its chatbot ChatGPT, while Microsoft and Google have indicated their willingness to assist customers facing legal risks related to their products and services.
Looking ahead, more business leaders plan to invest in responsible AI in 2024 than did in 2023, according to AWS research conducted by Morning Consult. Many recognize the importance of fair, accurate, secure, safe, transparent, and inclusive AI practices.
In conclusion, despite the hype surrounding generative AI, businesses are still in the early stages of implementation. It is crucial for organizations to develop strategies that align with their unique needs rather than adopting generative AI simply because it’s trending. Leaders must also weigh the challenges of data availability and model transparency. By prioritizing responsibility and accounting for potential risks, businesses can navigate the path toward successful generative AI adoption.