Published on July 4, 2024, 4:35 am

The software company Planview, based in Austin, began using generative artificial intelligence (AI) to boost productivity around 18 months ago. Over the same period, the company started integrating generative AI into its products, creating a co-pilot with which users interact to carry out strategic portfolio management and value stream management. The co-pilot generates planning scenarios to help managers meet product launch goals, and it suggests ways to move deliverables along roadmaps, share work among teams, and reallocate investment.

As a pioneer in this field, Planview quickly realized that fully leveraging AI would require policies and governance covering both its internal operations and the enhancements to its product offerings. Drawing on the company’s experience and on insights from other chief information officers, four key lessons emerge to help organizations shape their own approach to AI governance.

AI governance is not fundamentally different from any other form of governance. According to Mik Kersten, CTO of Planview, since most AI policy relates to data, it should be straightforward to leverage existing frameworks. Planview took guidelines they were already using for open source and the cloud and tailored them to suit their AI governance needs.

Florida State University (FSU), a vastly different organization from Planview, likewise built its AI governance on an existing IT governance body that convenes periodically to prioritize investments and risks. “We classify investments financially as well as in terms of value and impact across the entire campus,” notes Jonathan Fozard, the university’s CIO. FSU’s use cases range from scientific research to office productivity, and include teaching AI as part of curricula across the many fields where students are likely to encounter it upon entering the workforce.

Wall Street English, an international English language academy headquartered in Hong Kong, devised its own AI stack to master a technology it deems essential to its business. “We strive for quicker innovation, better outcomes, and a range of customized solutions that perfectly fit the needs of students and teachers,” says Roberto Hortal, the company’s chief product and technology officer. As part of its policy, the company maintains a proactive stance, staying abreast of the latest advancements, best practices, and potential risks.

Incorporating AI into self-learning programs is one way Wall Street English applies the technology. The company uses it for voice recognition that gives students feedback on pronunciation, and as the basis for conversational agents that let students practice speaking by simulating real-life scenarios.

Each organization tailors its AI governance framework to what it most needs to safeguard, whether intellectual property or cultural sensitivities, and to ensure compliance both internally and among the subcontractors handling work externally. Clear policies on building versus buying services, and on the use of open-source code, go a long way toward maintaining control over data integrity.

Ultimately, early adoption of robust AI governance helps align organizational goals with technological advancement while guarding against the risks of rapid implementation. Establishing coherent policies upfront about the level of risk an organization will accept in its AI applications lays a solid foundation for future innovation without compromising data security or customer trust.

