Whose ethics?

Lachlan McCalman
Gradient Institute
Mar 25, 2019


A discussion of who decides the ethical stance encoded into an ethical AI

We at Gradient Institute are often asked who decides the particular ethical stance encoded into an “ethical AI”. Because we build such systems ourselves, the question is often put more pointedly as “whose ethics” we will encode into them. This post addresses those questions.

The Short Answer

The people ultimately responsible for an AI system must also be responsible for the ethical principles encoded into it. These are the senior executives and leaders in business and government who currently make strategic decisions about their organisation’s activities. The ethical principles that they determine will, as with the decision-making they already do, be informed by a mixture of their organisation’s objectives, legal obligations, the will of the citizens or shareholders they represent, wider social norms and, perhaps, their own personal beliefs. Gradient Institute has neither the moral nor legal authority to impose its own ethical views on AI systems, nor the domain expertise to balance the complex ethical trade-offs inherent in any particular application.

Instead, Gradient Institute’s role in creating ethical AI is to help those senior executives and leaders with two key challenges: precisely specifying and incorporating their ethical stance into the systems they oversee, and actually creating an AI system that can measure, control and communicate the ethical consequences of its actions.

The Longer Answer

Ethics is the study of how we as humans should act. As such, almost every consequential decision we can imagine making can be viewed through an ethical lens. It is an all-too-common mistake, for example, to view a financial decision as being solely within the purview of economics. On the contrary, that decision will likely have social, environmental, psychological and physical consequences as well, perhaps for many people. In general, any significant decision will have direct and indirect consequences for the wellbeing of ourselves, others, and the world around us.

The study of ethics is not limited to examining rarefied philosophical questions. It is relevant in our everyday lives and in the everyday functioning of society. Indeed, much of the structure of society is set up precisely for the purpose of making ethical judgements: taking many and varied goals, desires, preferences and potential consequences, and weighing them to inform a single decision. Governments and democracy, boardrooms and bureaucracy, councils and collectives are all social tools for deciding how to act when a decision has a variety of impacts and affects many people. In other words, they are tools for making ethical decisions.

So, to decide whose ethics should be put into an organisation’s AI, look to whose ethics already go into that organisation’s decisions. Does the organisation have an executive, a board, or a minister? The decision-making structures already in place are ultimately what determine the organisation’s conduct, and as such reflect the true ethics of the organisation, whether intentionally or not. Therefore, if an organisation wants the AIs it develops to behave in a way that is ethically consistent with its current decision-making processes, principles and structures, it is in those processes, principles and structures that it should seek guidance.

The Objectives of AI Systems

If existing decision-making structures already determine the ethics of their organisations, does that mean existing AI systems already encode those ethics?

Unfortunately not. AIs today are typically given the simplest of objectives, like maximising profit or customer interactions. The responsible parties governing them likely have a far more sophisticated set of goals and constraints than this, but those goals and constraints are not currently built into the systems.

Consider the case of an AI system whose goal is specified as maximising the number of tax cheats caught, by auditing individuals based on a predicted risk score. If the historical data indicates even a slightly higher rate of infringement by a minority group, the AI is likely to select members of that group overwhelmingly. The responsible party might reasonably respond to the ensuing public outcry by exclaiming that whilst they can see the behaviour is unfair, they never told the algorithm to do it. The responsible party could see a clear breach of their own ethics after the fact, but failed to provide the AI with a goal to be fair to the minority group beforehand.
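
To make this concrete, here is a minimal simulation of the failure mode (the group sizes, infringement rates and noise level below are invented for illustration, not drawn from any real tax data):

```python
import numpy as np

rng = np.random.default_rng(0)

# An invented population: group B is a 10% minority whose historical
# infringement rate is only one percentage point higher than group A's.
n = 100_000
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])
base_rate = np.where(group == "B", 0.06, 0.05)

# Stand-in for a well-calibrated risk model: it learns each group's
# historical rate, plus a little per-individual estimation noise.
risk_score = base_rate + rng.normal(0.0, 0.005, size=n)

# The naive 'maximise cheats caught' policy: audit the top 1% by risk.
audited = np.argsort(risk_score)[-n // 100:]

for g in ["A", "B"]:
    print(f"group {g}: {np.mean(group == g):.0%} of population, "
          f"{np.mean(group[audited] == g):.0%} of audits")
```

With these invented numbers, the 10% minority receives the overwhelming majority of the audits: ranking by risk score turns a one-percentage-point difference in historical rates into a near-total difference in treatment.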

A major reason why so few AI systems today encode any ethical considerations is that such considerations must be translated into mathematics before an AI can act on them, and doing so is exceedingly difficult. Unfortunately, it’s not possible to simply tell the algorithm to “be reasonably fair whilst achieving your goal”: that instruction is not defined precisely enough to be useful. AIs need explicit mathematical definitions of ‘reasonably’ and ‘fair’ for the specific context in which they operate. Finding a sufficiently accurate and complete way to perform this translation is one of the key challenges of building ethical AI, and remains an open research problem.
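
For example, one of the simplest candidate translations of ‘fair’ is demographic parity, which requires that the rates at which two groups are selected (here, audited) differ by at most some tolerance ε:

$$\bigl|\,\Pr(\text{audit} \mid \text{group } A)\;-\;\Pr(\text{audit} \mid \text{group } B)\,\bigr|\;\le\;\varepsilon$$

Even this small formula forces two genuinely ethical choices that the mathematics cannot make by itself: the value of ε, and the choice of demographic parity over competing definitions such as equal opportunity (equal audit rates among actual infringers), which in general cannot all be satisfied at once.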

Note, too, that telling a human to “be reasonably fair whilst achieving your goal” isn’t very helpful either. All that instruction really does is delegate the ethical question to the human, who must still define for themselves what ‘fair’ and ‘reasonable’ look like. Two different humans will almost certainly interpret it in two different ways, and without a more precise definition it isn’t even possible to say which of them took the better approach.

Despite the difficulty of precisely defining an ethical objective, we can already do better than a single consideration like ‘maximise profit’. Returning to the auditing example: if the responsible party wanted to prevent the algorithm from unfairly targeting minorities, they could work with technical experts to formulate a statistical measure of what fairness means in the context of their system, and draw a quantitative line in the sand that this measure cannot be allowed to cross. The responsible party could then place that line by finding their preferred trade-off between the level of fairness and the system’s primary goal, as the sketch below illustrates.
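
Here is a minimal sketch of what placing that line could look like, reusing the invented numbers from the simulation above; a real system would use a fairness measure and an optimisation method chosen for its specific domain:

```python
import numpy as np

# Invented numbers carried over from the earlier sketch.
n_a, n_b = 90_000, 10_000   # group sizes
p_a, p_b = 0.05, 0.06       # historical infringement rates
budget = 1_000              # total audits we can afford

print("max audit-rate gap | expected catches | audits on group B")
for eps in [0.00, 0.01, 0.02, 0.05, 0.10]:
    # Expected catches are maximised by auditing the higher-risk group
    # as heavily as the constraint |rate_b - rate_a| <= eps allows.
    rate_a = max((budget - n_b * eps) / (n_a + n_b), 0.0)
    rate_b = min(rate_a + eps, budget / n_b, 1.0)
    catches = n_a * rate_a * p_a + n_b * rate_b * p_b
    print(f"{eps:18.2f} | {catches:16.1f} | {n_b * rate_b:17.0f}")
```

Each row is one point on the trade-off curve: forcing the audit rates to be equal costs some expected catches relative to the unconstrained policy, and loosening the constraint buys catches at the price of disparity. Choosing which row to operate at is exactly the ethical decision that belongs to the responsible party.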

Addressing the Problem Today

Clearly, a process like the one described above will be unfamiliar to many CEOs, boards and department heads today. They will likely lack the technical skills to carry it out by themselves; instead, it will require new and close collaboration between them and their technical staff. The responsible party’s choices about what should be done must be informed by technical expertise on what can be done.

The decision-making structures themselves may also have to change to accommodate what AI brings. AI allows actions on an unprecedented scale: a single system can affect the lives of millions of people in a single day. Being responsible for such a system is fundamentally different from supervising a handful of human workers. Responsible parties will therefore need new skills and tools to meaningfully exercise their duties.

Unfortunately, even if we successfully encode the intent of a decision-making body into an AI system, we may not have created a system that acts ethically. The system may have an incomplete understanding of the potential consequences of its actions, and so fail to act in accordance with its ethical intent. Or the intent itself may comprise short-term or selfish goals that would not hold up to greater scrutiny.

A Scientific Approach to Ethics

We may be able to improve the ethical behaviour of an automated system by getting better at understanding the consequences of its actions. Some disagreement about ethics is really disagreement about the likely consequences of a specific action or policy: both sides of politics want to make the poor better off, but is that best achieved by lowering company tax to create jobs, or by raising the minimum wage? Advances in causal inference, aided by the quantities of data AI systems are collecting, may help resolve some of these disputes scientifically. The more evidence accrued about a decision’s consequences, the harder it is to justify making decisions that cause unnecessary harm. We’ll expand on this relationship between ethics and causality in a future blog post.

Another force that may work to improve the overall ethics of our society is, perhaps counter-intuitively, AIs’ need for clear instructions. Because all goals, constraints, and trade-offs must be provided to an AI in explicit mathematical form, they can be subjected to scrutiny. There can be no ambiguity about a responsible party’s intentions and priorities once the mathematical objectives of their AI are on display.
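
As an illustration, the whole intent of the hypothetical auditing system above could be published as a single constrained objective (the threshold of 0.02 is again an invented example):

$$\max_{\text{audit policy}} \;\; \mathbb{E}[\text{cheats caught}] \qquad \text{subject to} \qquad \bigl|\,\Pr(\text{audit} \mid A)\;-\;\Pr(\text{audit} \mid B)\,\bigr|\;\le\;0.02$$

Anyone reading this can see exactly what the system is optimising and exactly how much disparity its owners were prepared to accept, a degree of precision that instructions to human decision-makers never achieve.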

By making these mathematical objectives more transparent, we encourage better and more ethical actions. Making them fully public may be appropriate in some cases, whilst in others, such as law enforcement, much of the good can still be done by providing the objectives to oversight bodies like regulators or the judiciary.

Conclusion

“Whose ethics” get encoded into an AI decision system is a specific case of a much broader question: how do we, as a society, balance the competing objectives and wellbeing of individuals and groups? We currently address that question with a wide variety of decision-making structures, like governments, boards and councils, each governed by laws, rules and norms specific to its domain of concern.

Choosing the intent for an AI system is no different in principle, and must engage those same decision-making structures. However, it is different in practice; it requires solving challenges like precisely encoding complex objectives in mathematics, and understanding the many direct and indirect consequences that arise from the system’s possible actions.

Ultimately, it is the responsible party, not the programmers, who must make the difficult ethical trade-offs amongst their objectives and so craft the ethical position of their AI system.
