Published on November 16, 2023, 4:16 pm
Many large companies today recognize the importance of artificial intelligence (AI) for their future and are actively incorporating AI applications into various parts of their businesses. However, they also acknowledge that AI has ethical implications, and it is crucial to ensure that the AI systems they develop and implement are transparent, unbiased, and fair.
While many companies understand the significance of ethical AI, most are still in the early stages of addressing it. Some may have encouraged their employees to adopt an ethical approach to AI development and use, or drafted a preliminary set of AI governance policies. However, according to a recent survey, only 6% of U.S. senior leaders have actually developed ethical AI guidelines, even though 73% believe such guidelines are important.
The process of integrating ethics into AI can be broken down into five stages: evangelism, policy development, recording, review, and action. During the evangelism stage, representatives from the company emphasize the importance of AI ethics. In the policy development stage, the company deliberates on and approves corporate policies regarding ethical approaches to AI. The recording stage entails collecting data on each use case or application using methods such as model cards. In the review stage, a systematic analysis is conducted to determine whether each use case meets the company’s criteria for ethical AI. Finally, in the action stage, decisions are made regarding whether to accept the use case as is or send it back for revision or rejection.
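The recording stage mentioned above typically relies on artifacts like model cards. As a rough illustration only (the stage names come from this article, but the fields and checks below are hypothetical; Unilever's actual template is not public), a minimal model-card record might look like this:

```python
# Illustrative sketch of a "recording stage" model card.
# All field names and values are hypothetical examples, not Unilever's schema.

model_card = {
    "model_name": "store-attendance-vision",       # hypothetical use case
    "intended_use": "register daily attendance via selfies",
    "owner": "brand-operations",                   # accountable business owner
    "training_data": "internal photo dataset",
    "known_limitations": ["lighting sensitivity", "demographic skew"],
    "human_oversight": True,   # per policy: significant decisions involve humans
}

def is_complete(card, required=("model_name", "intended_use",
                                "owner", "human_oversight")):
    """Check that the card fills every required governance field,
    so the review stage has the information it needs."""
    return all(card.get(field) is not None for field in required)

print(is_complete(model_card))  # True
```

A record like this gives the later review and action stages something concrete to evaluate, rather than relying on verbal descriptions of each use case.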
It is during the review and action stages that a company can truly assess whether its AI applications meet transparency, bias mitigation, and fairness standards. However, in order to establish these stages effectively, companies must have a significant number of AI projects and systems in place for gathering information. Governance structures should also be established to make decisions about specific applications. Many companies may not yet possess these prerequisites but will need them as they mature digitally with greater emphasis on AI.
Unilever, a British consumer packaged goods company known for brands like Dove and Ben & Jerry’s, is an example of a company that has taken early steps to address AI ethics. With a focus on corporate social responsibility and environmental sustainability, Unilever recognized the potential of AI to enhance its operations globally. The company established an Enterprise Data Executive and a governance committee to embed responsible and ethical AI use into its data strategies. Their goal was to leverage AI-driven innovation while promoting fairness and equity in society.
Unilever has successfully implemented all five stages of the AI ethics process mentioned earlier. As a starting point, it developed a set of policies that included guidelines such as ensuring that significant decisions affecting individuals are never fully automated but always involve human judgment. Unilever also adopted other AI-specific principles, such as holding the owners of AI systems accountable rather than the systems themselves.
To ensure responsible development and unlock the full potential of AI, it became clear to Unilever’s committee members that policies alone would not be sufficient. They needed to develop a robust ecosystem of tools, services, and human resources to ensure the proper functioning of AI systems. Unilever acknowledged that many of their AI systems were developed in collaboration with external vendors, such as advertising agencies using programmatic buying software. Therefore, their approach to AI ethics extended beyond internal developments to encompass externally sourced capabilities as well.
Unilever’s commitment to comprehensive AI assurance led to the development of a compliance process that examined each new AI application for intrinsic risks related to both effectiveness and ethics. By integrating this process with existing compliance areas such as privacy risk assessment, information security, and procurement policies, Unilever ensured that no AI application could be deployed without undergoing review and approval. Depending on complexity, cases were evaluated internally by experts or assessed manually by external professionals.
The proposed use cases were evaluated based on their potential ethical and efficacy risks, which were communicated to the proposers along with recommended mitigations. Statistical tests were conducted after developing the AI applications to assess bias, fairness, and efficacy. Cases involving significant risks that couldn’t be adequately mitigated would be rejected based on Unilever’s values. Final decisions about AI use cases were made by a senior executive board comprising representatives from legal, HR, data, and technology departments.
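The article does not disclose which statistical tests Unilever runs, but one widely used bias check is the "four-fifths rule" on selection rates between demographic groups. The sketch below is a hypothetical illustration of that kind of test, not Unilever's actual methodology:

```python
# Illustrative bias check: the four-fifths rule compares favorable-outcome
# rates between two groups; a ratio below 0.8 is a common red flag.
# All data here is made up for demonstration.

def selection_rate(outcomes):
    """Fraction of favorable outcomes (1 = favorable) in a group."""
    return sum(outcomes) / len(outcomes)

def passes_four_fifths(group_a, group_b, threshold=0.8):
    """Return True if the lower group's rate is at least `threshold`
    times the higher group's rate."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    if max(rate_a, rate_b) == 0:
        return True  # no favorable outcomes in either group: nothing to compare
    return min(rate_a, rate_b) / max(rate_a, rate_b) >= threshold

# Hypothetical model outputs for two demographic groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% favorable
group_b = [1, 0, 0, 1, 0, 1, 0, 0]   # 37.5% favorable
print(passes_four_fifths(group_a, group_b))  # False: ratio is 0.5
```

A failing result like this would, in the process described above, trigger recommended mitigations or, if the risk could not be mitigated, rejection of the use case.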
One example of Unilever’s AI application is the use of computer vision AI in its cosmetic brand sales areas within department stores. The project aimed to automatically register attendance through daily selfies and evaluate agents’ appearance. However, during the AI assurance process, the team realized the need for human oversight in reviewing flagged photos and taking responsibility for any necessary actions.
To aid in the AI assurance process, Unilever partnered with Holistic AI, a London-based company. This partnership enabled the review of AI assurance using a platform that encompassed all types of predictions and automation. Unilever’s data ethics team used this platform to monitor the status of AI projects submitted by various teams. The platform classified projects by risk level: red indicated non-compliance with Unilever standards and rejection; yellow represented acceptable risks with ownership assigned to the business owner; green denoted compliance with Unilever’s standards and approval to proceed.
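The traffic-light triage described above can be sketched as a simple rule. The scores and thresholds below are entirely hypothetical and are not Holistic AI's actual scoring model; the sketch only shows the shape of the red/yellow/green decision:

```python
# Hypothetical triage rule for the red/yellow/green classification.
# Risk scores (0-100) and cutoffs are invented for illustration.

def triage(risk_score, mitigations_accepted):
    """Map a project's assessed risk to a traffic-light status."""
    if risk_score >= 70:
        return "red"       # non-compliant with standards: reject
    if risk_score >= 30:
        # acceptable risk only if the business owner accepts ownership
        return "yellow" if mitigations_accepted else "red"
    return "green"         # compliant: approved to proceed

print(triage(85, True))    # red
print(triage(45, True))    # yellow
print(triage(10, False))   # green
```

The key design point is that yellow is not a free pass: responsibility for the residual risk sits explicitly with a named business owner.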