Published on October 19, 2023, 6:21 pm

TLDR: California is leading the way in leveraging artificial intelligence (AI) within its state government. Governor Gavin Newsom has signed an executive order requiring state agencies to assess the risks and potential impacts of AI implementation. The focus is on using generative AI to enhance the customer experience for residents who rely on public services. California is positioned to help set AI standards for governments across the country, and it will publish procurement guidelines and create a testing environment for agencies. The goal is to balance exploring new horizons with mitigating potential risks, with transparency, cybersecurity, and responsible implementation as key priorities.


California is taking a major step forward in leveraging the power of artificial intelligence (AI) within its state government. Governor Gavin Newsom recently signed an executive order requiring state agencies to assess the risks of AI implementation and its potential impact on their work, the state’s economy, and energy usage. The move reflects California’s commitment to innovation and to finding ways to improve public services.

Amy Tong, Secretary of the California Government Operations Agency, is leading the efforts to develop recommendations for new policies and regulations surrounding AI. Tong emphasizes that this is not just a task force but rather a collaboration between public and private entities working together towards a common goal. The primary focus of this collective effort is exploring how generative AI can enhance the customer experience for residents who rely on public services.

As the most populous state in the nation, with one of the largest economies in the world, California is unsurprisingly at the forefront of establishing AI standards for governments across the country. Liana Bailey-Crimmins, California’s Chief Information Officer and a member of the team responsible for implementing generative AI, explains that her office plans to publish procurement guidelines next month. It will also create a controlled testing environment, known as a “sandbox,” where agencies can explore new technologies before officially adopting them.

Generative AI tools are built on deep-learning models trained on enormous amounts of data, enabling them to produce text, images, and other content that resembles human-generated output. States nationwide are grappling with how best to use this rapidly advancing technology in their operations. Tong acknowledges that harnessing these benefits requires an understanding of the potential risks so that both policy developers and program implementers can deploy generative AI effectively.

Bailey-Crimmins stresses the importance of thoroughness in next month’s report. Covering all key aspects of generative AI adoption is meant to safeguard public services and set new technologies up for successful integration, avoiding situations where projects cannot adapt and evolve with the needs of constituents.

In addition, Tong and her team will recommend pilot programs that can serve as test grounds for generative AI’s efficacy. These programs will focus on procurement guidelines and staff training to ensure the government workforce stays current and capable of leveraging this transformative technology. The approach is meant to balance exploring new horizons with mitigating potential risks.

Both Tong and Bailey-Crimmins emphasize the importance of transparency in this process. Implementing generative AI brings about changes in how government services are offered. To foster trust among residents, it is crucial to keep them informed about the nature of these changes. Maintaining a clear line of communication is essential when governments delve into uncharted territories such as AI-driven solutions.

Moreover, cybersecurity risks associated with generative AI must be taken into account. Bailey-Crimmins notes that while traditional AI systems already carry cybersecurity risks, generative AI introduces additional concerns that need to be addressed. Establishing appropriate terms and conditions with vendors becomes paramount in holding them accountable for security breaches and other potential issues.

Governor Newsom’s executive order sets specific deadlines for deliverables such as reports, employee training, and pilot programs, extending into January 2025. Despite the sense of urgency around generative AI among state governments across the country, Bailey-Crimmins remains focused on maintaining a long-term vision rather than rushing through projects without careful consideration.

The move by California to embrace generative AI within its state government represents an exciting step forward in harnessing the power of artificial intelligence. As the state develops policies and regulations around its use, it not only aims to enhance customer experiences but also emphasizes transparency, trustworthiness, and cybersecurity. By being meticulous and forward-thinking in their approach, California is demonstrating its commitment to responsible AI implementation that will benefit both the government and its constituents.

Source: StateScoop Special Report.
