Published on January 16, 2024, 5:09 pm
Singapore has unveiled a draft governance framework for generative artificial intelligence (GenAI) to address emerging issues related to incident reporting and content provenance. This proposed model builds upon the country’s existing AI governance framework, initially released in 2019 and updated in 2020.
The potential of GenAI to revolutionize traditional AI is significant, but it also brings risks, as stated by the AI Verify Foundation and Infocomm Media Development Authority (IMDA) in a joint statement. Recognizing the need for consistent principles globally, Singapore government agencies emphasized the importance of creating an environment where GenAI can be used safely and with confidence. They further added that the use and impact of AI extend beyond individual countries.
The draft document incorporates proposals from a discussion paper released by IMDA last year, which identified six risks associated with GenAI, including hallucinations, copyright challenges, and embedded biases. It also outlines a framework on how these risks can be addressed. The proposed GenAI governance framework draws insights from previous initiatives such as evaluating the safety of GenAI models and testing them in an evaluation sandbox.
Singapore’s draft framework for GenAI governance covers nine key areas deemed crucial for supporting a trusted AI ecosystem. These areas revolve around principles that require AI-powered decisions to be explainable, transparent, and fair. The framework also offers practical suggestions that both AI model developers and policymakers can implement as initial steps.
Content provenance is one of the critical components outlined in the draft framework. Singapore emphasizes the necessity of transparency regarding the generation of online content so consumers can assess its credibility. With easily created AI-generated content like deepfakes exacerbating misinformation, other governments are also exploring technical solutions such as digital watermarking and cryptographic provenance to address this issue. The draft suggests working with publishers to support embedding and displaying digital watermarks along with additional provenance details securely.
Another crucial component highlighted by Singapore’s draft framework focuses on security concerns surrounding GenAI. The framework acknowledges the new risks brought by GenAI, including potential attacks and data breaches. It recommends refining security-by-design concepts applied in the system development lifecycle to address challenges associated with injecting natural language as input. Furthermore, the probabilistic nature of GenAI may require new approaches to traditional evaluation techniques for system refinement and risk mitigation.
Recognizing the necessity of striking a balance between user protection and innovation, the Singapore government agencies stress that addressing accountability, copyright, misinformation, and other related topics requires a practical and holistic approach. To establish international consensus, Singapore is collaborating with governments such as the United States to align their respective AI governance frameworks.
Feedback on Singapore’s draft GenAI governance framework is actively welcomed until March 15th.
In conclusion, Singapore’s release of a draft governance framework on generative artificial intelligence (GenAI) reflects its commitment to address emerging issues related to AI technology. By building upon their existing AI governance framework and incorporating insights from previous initiatives, Singapore aims to create an environment where GenAI can be used safely and confidently. The proposed framework covers essential areas such as content provenance and security while emphasizing the need for practical implementation and international collaboration in shaping AI governance.