Ideas for Leaders #814

How to Keep Your AI Ethical


Key Concept

While the significant and positive impact of Artificial Intelligence (AI) on business and society at large is well known, less attention is paid to the potential for unethical applications or outcomes of the new technology. A framework developed by Oxford University researchers offers an action plan for ensuring the ethical application of AI.

Idea Summary

While the technological capabilities and impact of artificial intelligence (AI) have brought significant change to multiple facets of business and even society, the core of AI is still machines, not humans. And while these machines can learn, they cannot discern right from wrong unless we deliberately step in to add an ethical dimension to AI. Where to start?

In a report co-sponsored by the Oxford Future of Marketing Initiative and the International Chamber of Commerce, a team of researchers from Oxford University’s Saïd Business School review and analyse the academic research in AI ethics, as well as ethical AI-related business statements and governmental and intra-government documents, to develop a framework for maintaining ethical boundaries in the use of artificial intelligence.

The framework’s first step is to develop a hierarchical set of principles—hierarchical in the sense that major, overriding principles are broken down into smaller principles. 

The two fundamental principles of ethical AI are responsibility, which refers to the processes supported or driven by AI, and accountability, which refers to the outcomes of AI-related activities and operations. 

Ensuring accountability begins with proactive leadership, and also includes reporting, contesting, correcting, and liability.

Responsibility is built on a more complex set of components, starting with three key principles: human-centric, fair, and harmless. Human-centric is concerned with the rights and self-determination of individuals, as well as the domains that benefit humans, such as sustainability. Thus, human-centric processes are transparent, intelligible and sustainable, as well as beneficial. The principle of fairness is achieved through processes that are just, inclusive, and non-discriminatory. Finally, harmless systems are safe, robust, and private.

Using the parameters just described as a guide, the next step in ensuring the ethics of AI applications and use in an organization is to identify where the risks of unethical AI can occur. The first risk ‘bucket’ is data. For example, the selection of data may be discriminatory or invade the privacy of individuals. The second risk bucket involves algorithms—the set of instructions at the heart of AI that might be influenced by the biases of those developing the algorithms. The final risk bucket is business use, which covers business goals—i.e., AI is used to achieve unethical business goals—and deployment—i.e., users can subvert the original ethical intention of AI towards unethical activities, including activities with adverse societal consequences.
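As an illustrative sketch only (the names below are hypothetical, not taken from the Oxford report), the framework's two fundamental principles, their sub-principles, and the three risk buckets can be expressed as plain data that an organization might use to seed an audit checklist:

```python
# Hypothetical encoding of the framework described above.
# All dictionary keys and list entries mirror the article's terms;
# the variable names themselves are illustrative assumptions.
ETHICAL_AI_PRINCIPLES = {
    "accountability": [  # refers to the outcomes of AI-related activities
        "proactive leadership", "reporting", "contesting",
        "correcting", "liability",
    ],
    "responsibility": {  # refers to the processes supported or driven by AI
        "human-centric": ["transparent", "intelligible",
                          "sustainable", "beneficial"],
        "fair": ["just", "inclusive", "non-discriminatory"],
        "harmless": ["safe", "robust", "private"],
    },
}

# The three places where risks of unethical AI can occur.
RISK_BUCKETS = ["data", "algorithms", "business use"]
```

A structure like this makes the hierarchy explicit: each top-level principle breaks down into the smaller principles a reviewer would check an AI application against.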

With principles and risks identified, an organization can now take practical steps to ensure the ethical application of AI. The first step is a statement of intent, similar to a mission or vision statement that proclaims the organization’s commitment to ethical AI values, policies and practices. The second step is to implement an ethical AI plan for the organization that would include: 

  1. a specific plan for each application of AI for identifying any ethical concerns and risks associated with data, algorithms and business use;
  2. management and mitigation strategies for each risk identified; and
  3. a careful record of all actions and decisions related to the identification, management and mitigation of ethical AI risks.

As new risks are discovered or emerge, the original application plans can be updated. 
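The three-step plan above amounts to a per-application risk register: identify risks in each bucket, attach a mitigation strategy, and keep a record of every action and decision. A minimal sketch of such a register might look like this (all class, field, and example names are hypothetical assumptions, not from the report):

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    bucket: str                    # "data", "algorithms", or "business use"
    description: str
    mitigation: str = "unassessed"

@dataclass
class ApplicationPlan:
    """Ethical AI plan for one application of AI (step 2 of the framework)."""
    application: str
    risks: list = field(default_factory=list)
    log: list = field(default_factory=list)  # record of actions and decisions

    def identify(self, risk: Risk) -> None:
        self.risks.append(risk)
        self.log.append(f"identified ({risk.bucket}): {risk.description}")

    def mitigate(self, description: str, strategy: str) -> None:
        for risk in self.risks:
            if risk.description == description:
                risk.mitigation = strategy
                self.log.append(f"mitigated: {description} -> {strategy}")

# Illustrative usage for a hypothetical credit-scoring application.
plan = ApplicationPlan("credit-scoring model")
plan.identify(Risk("data", "training data under-represents some groups"))
plan.mitigate("training data under-represents some groups",
              "re-balance the sample and audit outcomes quarterly")
```

Because the register is just data, updating it as new risks emerge (the framework's final point) means adding entries rather than rewriting the plan.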

Business Application

Borrowing an analogy from its developers, this framework and action plan for applying AI ethically offers both a ‘flight plan’—consisting of the ethical AI statement of intent—and a ‘flight checklist’ for each application of AI in an organization. The checklist allows the organization to monitor and manage the sources of potential ethical issues in its data, algorithms and AI business use, ensuring that AI in the organization leads to outcomes that are human-centric, fair and harmless.

It’s important to note the dynamic nature of the framework. Vigilant monitoring for potential ethical issues in its applications of AI allows the organization not only to identify problem areas but also to put safeguards and preventive measures in place, strengthening its commitment to responsible and accountable processes.


Idea conceived

  • July 2021

Idea posted

  • February 2022

DOI number

10.13007/814

Subject

Real Time Analytics