Published on February 14, 2024, 8:08 pm

CIOs face challenges implementing generative AI projects. Many are struggling to move these initiatives from the experimental phase into full production, falling short of the goals they set the previous year. In fact, a poll conducted by the Boston Consulting Group found that two-thirds of C-suite leaders felt their organization’s progress with generative AI was not up to par in 2023.

One of the primary reasons for this struggle is that generative AI experiments often fail. Even when pilots do go well, CIOs find it difficult to scale the solutions due to several factors. These include a lack of clarity on success metrics, concerns about cost, and the fast-evolving technology landscape.

Each enterprise has its own risk frameworks, priorities, and policies that determine whether an experiment can progress to broader testing. Even when all of these align, however, experiments can still stall on lackluster results in initial trials.

Not every failed experiment should be viewed as an outright failure, however. Zillow, for example, began experimenting with various generative AI tools last year. As part of that process, the company set specific expectations for how each tool would work within its ecosystem, and it weighed factors such as impact on user experience and productivity gains when evaluating candidate solutions.

In the tech industry, there is a strong emphasis on failing fast and learning from mistakes. This approach encourages CIOs and tech leaders not to lower their expectations for generative AI but rather to adapt their plans when experiments stall, whether that means making tweaks or pulling out of a project entirely.

Brian Jackson of Info-Tech Research Group believes that, despite the failed attempts likely along the way, generative AI will be a transformative technology that unlocks new business models, and that organizations should aim for transformative outcomes rather than scaling back their ambitions.

The reality is that failed generative AI experiments are more common than many enterprises are willing to admit. Ankur Sinha, CTO at Remitly, stated that anyone who claims otherwise is likely not being completely honest. Sinha emphasized the importance of defining metrics of success when embarking on a project. This helps organizations identify the value that a tool or solution will deliver.

At Remitly, teams are currently experimenting with AI-powered coding companions and test-generation tools. One key metric they track is the change rate for generated code: if reviewers rewrite too much of what a tool produces, or too little of its output reaches production, the tool may not be worth pursuing. To mitigate the risk of AI-generated code introducing security vulnerabilities, all generated code passes through automated quality gates and human review.
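To make that concrete, here is a minimal sketch of how such tracking might work. The article does not describe Remitly's actual tooling, so the data model, metric definitions, and thresholds below are hypothetical illustrations of a change-rate and production-rate check, not a real implementation.

```python
from dataclasses import dataclass

@dataclass
class GeneratedChange:
    """The fate of one AI-generated code suggestion during review (hypothetical model)."""
    lines_suggested: int        # lines produced by the coding companion
    lines_modified: int         # of those, lines humans rewrote before merge
    merged_to_production: bool  # did the change ship?

def change_rate(changes: list[GeneratedChange]) -> float:
    """Fraction of AI-generated lines rewritten by humans before merge."""
    suggested = sum(c.lines_suggested for c in changes)
    modified = sum(c.lines_modified for c in changes)
    return modified / suggested if suggested else 0.0

def production_rate(changes: list[GeneratedChange]) -> float:
    """Fraction of suggestions that made it into production."""
    if not changes:
        return 0.0
    return sum(1 for c in changes if c.merged_to_production) / len(changes)

# Assumed cutoffs for illustration: if generated code is heavily rewritten
# or rarely ships, the tool may not justify further investment.
MAX_CHANGE_RATE = 0.40
MIN_PRODUCTION_RATE = 0.50

def worth_pursuing(changes: list[GeneratedChange]) -> bool:
    """Apply the (assumed) thresholds to decide whether to keep the tool."""
    return (change_rate(changes) <= MAX_CHANGE_RATE
            and production_rate(changes) >= MIN_PRODUCTION_RATE)

# Example: two suggestions, one heavily rewritten and never shipped.
history = [
    GeneratedChange(lines_suggested=120, lines_modified=30, merged_to_production=True),
    GeneratedChange(lines_suggested=80, lines_modified=60, merged_to_production=False),
]
print(worth_pursuing(history))  # False: change rate is 45%, above the 40% cap
```

In practice, the same review events could also feed the security side of the process, with static analysis gates and human sign-off required before any generated code merges, as the article describes.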

While some generative AI experiments fail for lack of ROI metrics or guardrails, others can deliver immediate gains. It is up to each business to determine which concepts have the most potential for success as the technology continues to evolve.

In summary, CIOs should not lower their expectations for generative AI despite the challenges of moving projects from experiment to production. By adapting plans and learning from failed attempts, organizations can harness generative AI as a transformative technology that unlocks new business models and drives innovation across industries.
