
"AI Fairness Theory" - Artificial Intelligence a Reasonable Decision Maker towards Equilibrium of Arrows Paradox.



Abstract

This study offers an AI Reasonable Choice Theory (AI-RCT, or AI Fairness Theory) in which AI is defined as a design and framed as a multi-criterion decision maker. AI is understood as a reasonable decision maker when its choices are governed by the maximization of well-defined, stable preferences. For AI to make reasonable decisions is to choose the alternative for which it can advance the most textual/contextual justification to validate its choice; reasonableness is limited by the amount of information acquired by the AI decision maker, and justifications may vary according to the different sets of circumstances AI encounters. AI Fairness Theory draws on the theory of judgment aggregation, the Pointing and Justification (PJ-X) model, the Condorcet method and Arrow’s General Possibility Theorem. This study concludes that rationality in the economic sense is not necessarily incompatible with AI’s textual/contextual-dependent choice procedure, and that where it is, AI’s choice cycles could nevertheless be reasonable.


Key words

Artificial intelligence, Decision maker, Reasonableness, Judgment aggregation, Condorcet method, Pointing and Justification (PJ-X) model, Arrow’s General Possibility Theorem


Terminology

Artificial intelligence (AI) is a design framed as multi-criterion decision-making, constructed from one or more computer programmes (including the applications and the operating system) used, or capable of being used, for the generation of inventions beyond what it has been programmed to do (which implies AI’s ability to exceed human expectations) and for constant intellectual movement, understood as increasing its knowledge by learning and its ability to generate unforeseen results. Furthermore, AI is a design that lacks consciousness. Consciousness has been defined as the state or quality of awareness, or of being aware of an external object or of something within oneself; lacking consciousness, AI thus lacks self-interest. A human being, on the other hand, has been defined as a design that has consciousness, self-awareness and self-interest, and is likewise framed as multi-criterion decision-making.


I. Introduction

Consciousness and self-awareness form one of the largest divides between human beings and artificial intelligence. While we may not fully understand ourselves, we can offer an economic rationale for our decisions. One of the basic concepts of economic theory is that a decision maker (a human being) makes choices in his or her best self-interest. This is known as Rational Choice Theory (RCT). RCT states that a decision maker (a human being) will make the choice that maximizes his or her own happiness. For example, a decision maker determines that, looking at all of her needs, a new red dress is her top priority; it is in her best interest to use her savings to purchase this dress. A result in choice theory establishes that this is essentially the case when a decision maker’s choices are internally consistent. Human beings can justify their decisions in natural language and point to the evidence which led to those decisions.


To the contrary, AI is usually only programmed to provide an answer based on the data it has been programmed with or has learned. That is to say, we can recognize the outcome of its actions, but most of the time we are not aware of how it arrived at them. There is, however, a growing field of research postulating that AI can justify its decisions. Researchers from the University of California, Berkeley, and the Max Planck Institute for Informatics propose the Pointing and Justification (PJ-X) model, which can justify an AI’s decision with ‘a sentence and point to the evidence by introspecting its decision and explanation process using an attention mechanism.’ According to that research, the AI analyses the data in two ways: one to answer the original question, and another to identify the data used to answer the question so that it can be translated into English. The AI was requested to consider the images in Figure 1.



Figure 1: Pointing and Justification (PJ-X) model


In both examples, the question “What is the person doing?” was asked, and the AI correctly answered “Skiing.” As noticed by the team, ‘[t]hough both images share[d] common visual elements (e.g., skis and snow), the textual/contextual justifications reflect differences in the two images: while one justifies the answer “Skiing” by discussing skis and mountain, the other justifies the answer with skis, hill, and clothing.’ In line with the above, one can argue that, with regard to human beings, economists advocate a self-interest interpretation of choices, whereas, with regard to AI, computer scientists advocate a more textual/contextual-sensitive interpretation of choice behaviour. According to the latter, an AI decision maker constructs its preferences at the time of choice, because it is at this moment that the AI is confronted with a set of data and needs to make a decision, without the influence of self-interest or self-awareness.


This study proposes a precise conceptualization of such behaviour and names it reason-based choice behaviour, referred to as the AI Reasonable Choice Theory. An AI decision maker makes a reasonable decision when it chooses, from the available alternatives, the one that attracts the most textual/contextual-dependent reasons in its favour. This study defines a reason for an alternative as an ordered pair of alternatives that lists this alternative as its first entry and another one as its second.


The textual/contextual sensitivity of reasons is reflected by the fact that, in any choice situation, only ordered pairs of alternatives which are both available count as reasons relevant to the AI decision maker’s choice in that situation. As illustrated by Figure 1, an AI decision maker needs to choose from a given set of alternatives. When faced with the problem of choosing from a given set of alternatives, an AI decision maker following the AI-RCT counts, for each available alternative, the number of relevant reasons in its favour and chooses the alternative which secures the maximum number of such reasons. Counting a reason with multiplicity allows the AI-RCT to capture the idea that an alternative may be superior to another one on several dimensions, or with respect to several properties or aspects.


This line of reasoning has been built on the basis of Tversky’s thought, which defines a choice option as a set of properties. He proposes a procedure of selection functioning as an algorithm (a code sketch follows the quotation below):


‘(a) the common characteristics of the considered choice set are eliminated, as any discriminating choice cannot be based on them;

(b) a characteristic is randomly selected and all the options not having this characteristic are eliminated. The higher the utility of a characteristic is, the larger the probability of selecting this characteristic is;

(c) if remaining options still have specific characteristics, one turns over at the first stage. In the contrary, if the residual choices have the same characteristics, the procedure ends. If only one option remains, it is selected. In the contrary, all remaining options have the same probability to be selected.’
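
To make the quoted procedure concrete, the following is a minimal Python sketch of this elimination-by-aspects algorithm. The aspect names, sets and utility weights are hypothetical illustrations, not data from Tversky or from this study’s tables.

import random

def eliminate_by_aspects(options, utilities):
    """One run of the quoted elimination-by-aspects procedure.

    options   -- dict: option name -> set of aspect names it possesses
    utilities -- dict: aspect name -> positive utility (selection weight)
    """
    remaining = {name: set(aspects) for name, aspects in options.items()}
    while True:
        # (a) eliminate aspects common to all remaining options:
        #     a discriminating choice cannot be based on them
        common = set.intersection(*remaining.values())
        remaining = {n: a - common for n, a in remaining.items()}

        # (c) if no distinguishing aspects remain, pick uniformly at random;
        #     if a single option remains, select it
        live = set.union(*remaining.values())
        if not live:
            return random.choice(list(remaining))
        if len(remaining) == 1:
            return next(iter(remaining))

        # (b) select an aspect with probability proportional to its utility
        #     and eliminate every option that lacks it
        aspects = sorted(live)
        chosen = random.choices(aspects, weights=[utilities[a] for a in aspects])[0]
        remaining = {n: a for n, a in remaining.items() if chosen in a}

# Hypothetical data for illustration only
sports = {
    "Alpine Skiing": {"skis", "slope", "lift"},
    "Snowboarding":  {"board", "slope", "lift"},
    "Bob Sledding":  {"sled", "track"},
}
weights = {"skis": 3, "board": 2, "sled": 1, "slope": 2, "lift": 1, "track": 1}
print(eliminate_by_aspects(sports, weights))

Because step (b) selects aspects at random in proportion to their utility, repeated runs can return different options; the procedure is probabilistic by design.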


In line with the above, let us consider the following example, which employs the scenario used by the Pointing and Justification (PJ-X) model. Let us assume that the AI decision maker faces the choice between three hypothetical winter sports: Alpine Skiing, Snowboarding, and Bob Sledding, which differ with respect to the aspects listed in Table 1.


Table 1: Aspects of the three hypothetical winter sports
As can be seen from Table 1, Alpine Skiing and Snowboarding are not comparable with respect to the tools they employ. Snowboarding and Bob Sledding are comparable neither with respect to the shape of the land on which they take place nor regarding the way their participants get to the starting point. Alpine Skiing and Bob Sledding, likewise, are comparable neither with respect to the shape of the land on which they take place nor regarding the way their participants get to the starting point; in addition, they are not comparable with regard to the tools they employ. Say this translates into two reasons to prefer Snowboarding over Bob Sledding and one reason to choose Alpine Skiing over Snowboarding. One could argue that, following the AI-RCT, an AI decision maker would choose Alpine Skiing because it secures the most relevant reasons in its favour.


This study, instead of writing down a list of which characteristics are preferred to which others, indicates a score next to each characteristic on a scale from 0 to 3 and adds up the numbers that an AI decision maker would assign to each characteristic. According to this calculation, Alpine Skiing gains 8 points, Snowboarding 5 points and Bob Sledding 1 point. The AI, as a reasonable decision maker, will prefer any winter sport with a higher total score over any winter sport with a lower total score.
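
As a rough illustration, the tallying can be sketched in a few lines of Python. Only the totals (8, 5 and 1) appear in the text; the per-characteristic breakdown below is an assumed example consistent with those totals, not the content of Table 1.

# Hypothetical per-aspect scores on the study's 0-3 scale; only the totals
# (8, 5 and 1) are given in the text, so this breakdown is illustrative.
scores = {
    "Alpine Skiing": {"tools": 3, "terrain": 3, "transport": 2},   # total 8
    "Snowboarding":  {"tools": 2, "terrain": 2, "transport": 1},   # total 5
    "Bob Sledding":  {"tools": 1, "terrain": 0, "transport": 0},   # total 1
}

# The reasonable AI decision maker prefers the alternative with the
# highest total score across all characteristics.
totals = {sport: sum(aspects.values()) for sport, aspects in scores.items()}
choice = max(totals, key=totals.get)

print(totals)   # {'Alpine Skiing': 8, 'Snowboarding': 5, 'Bob Sledding': 1}
print(choice)   # Alpine Skiing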


Now, let us consider whether this same reasoning could be employed by an AI decision maker to decide on human qualities and choose a winner. The AI decision maker faces the choice between three hypothetical presidential candidates: Alpha, Beta, and Gamma, who differ with respect to the aspects listed in Table 2.


Table 2: Aspects of the three hypothetical presidential candidates
Following the previous line of reasoning, Alpha and Beta are comparable neither with respect to their understanding of external affairs nor regarding their understanding of economics, but Alpha is more intelligent and has a higher level of understanding of internal affairs than Beta. Say this translates into two reasons to prefer Alpha over Beta. Likewise, there is one reason to choose Beta over Gamma: they can only be compared regarding their understanding of economics, and Beta has a better understanding than Gamma. Analogously, there is only one reason to prefer Gamma over Alpha: Gamma has a better understanding of external affairs. When faced with the choice from the set of all three hypothetical presidential candidates, an AI decision maker which follows the AI Fairness Theory and consents to these reasons will choose Alpha, because this candidate secures the most reasons (among those presented to the AI decision maker) in its favour. [A small caveat: this points to the need to argue for the creation of positive instructions for AI decision makers and their interpretation (e.g. some type of laws of evidence).]


To illustrate how the AI Fairness Theory works, this study reconsiders the example of the hypothetical presidential candidates. Before getting into these matters, let us recall the basics of set theory. Let us assume that S is a set, a collection of elements such as the set of presidential candidates: S = {a, b, c}, where a = Alpha, b = Beta and c = Gamma. R, on the other hand, denotes the set of relevant comparison dimensions, so R = {≻economics, ≻external affairs, ≻internal affairs, ≻intelligence}. The description of the prospects implies ≻internal affairs = ≻intelligence = {(a, b)}, because Alpha is more skilled than Beta in both internal affairs and intelligence; ≻economics = {(b, c)}, because Beta is more skilled in economics than Gamma; and ≻external affairs = {(c, a)}, because Gamma is more skilled in external affairs than Alpha. So an AI decision maker following the AI Fairness Theory with the set R of rationales will choose Alpha, γ(S) = a, because the AI decision maker has the most reasons to choose Alpha in that particular context.



With this in mind, this study formalizes the interpretation of choice behaviour according to which an AI decision maker chooses, from the available alternatives, the one for which it has the most text/context-dependent reasons. A choice function γ is reasonable whenever there exists a set of rationales R such that for all S ∈ P(X),

γ(S) = argmax_{x ∈ S} Σ_{≻ ∈ R} |{ y ∈ S : (x, y) ∈ ≻ }|,

that is, γ selects the alternative in S that gathers the greatest number of relevant reasons, counted with multiplicity across the rationales in R.
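
A minimal Python sketch of this choice function, using the presidential example above (a = Alpha, b = Beta, c = Gamma), might look as follows; the function and variable names are illustrative only.

def gamma(S, R):
    """Reason-based choice function: return the alternative in S with the
    most relevant reasons in its favour, counted with multiplicity.

    S -- set of available alternatives
    R -- list of rationales; each rationale is a set of ordered pairs
         (x, y) meaning 'x is superior to y on this dimension'
    """
    count = {x: 0 for x in S}
    for rationale in R:
        for x, y in rationale:
            if x in S and y in S:  # only fully available pairs are relevant
                count[x] += 1
    return max(count, key=count.get)

# a = Alpha, b = Beta, c = Gamma, as in the text
internal_affairs = {("a", "b")}
intelligence     = {("a", "b")}
economics        = {("b", "c")}
external_affairs = {("c", "a")}
R = [internal_affairs, intelligence, economics, external_affairs]

print(gamma({"a", "b", "c"}, R))  # a: two reasons for Alpha, one each for Beta and Gamma
print(gamma({"b", "c"}, R))       # b: in the smaller context only economics is relevant

Note how the choice is context-dependent: restricting the available set to {b, c} removes the reasons involving Alpha from consideration, and Beta wins.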


An alternative interpretation of the simplest version of the AI Fairness Theory is offered by the theory of social choice. In a voting system such as the Condorcet method, the candidate that wins by majority rule in all pairings against the other candidates is elected, whenever such a candidate exists. As argued by Pivato, ‘[i]ndeed, it is easy to construct examples where the Condorcet winner does not maximize social welfare [however] in a large population satisfying certain statistical regularities, not only is the Condorcet winner almost guaranteed to exist, but it is almost guaranteed to also be the fair social choice.’
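
For concreteness, a small sketch of a Condorcet-winner check in Python follows; the ballots are hypothetical and the function name is an illustrative choice, not an established API.

def condorcet_winner(candidates, ballots):
    """Return the candidate who beats every other candidate in pairwise
    majority comparisons, or None if no such candidate exists.

    ballots -- list of rankings, each ordered from most to least preferred
    """
    def beats(x, y):
        # x beats y if a strict majority of ballots rank x above y
        wins = sum(b.index(x) < b.index(y) for b in ballots)
        return wins > len(ballots) / 2

    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None

# Hypothetical ballots: Alpha beats Beta (2:1) and Gamma (3:0) pairwise
ballots = [["Alpha", "Beta", "Gamma"],
           ["Alpha", "Gamma", "Beta"],
           ["Beta", "Alpha", "Gamma"]]
print(condorcet_winner(["Alpha", "Beta", "Gamma"], ballots))  # Alpha

The function returns None when no candidate beats all others pairwise, which is precisely the situation Arrow’s theorem addresses in the next section.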


Furthermore, it is argued here that the structure of the AI Fairness Theory also applies to judgment aggregation problems. Wilson posed the question of whether Arrow’s impossibility theorem in social choice extends to the aggregation of attributes other than preferences. Such aggregation corresponds to the AI Fairness Theory’s summation over AI textual/contextual reasons rather than over complete and transitive preferences. Let us assume that for each binary comparison there is a group of AI decision makers who have knowledge of the problem at hand, and that we assign this group the exclusive right to determine the collective judgment in this choice situation. The AI Fairness Theory could then capture such aggregation problems, provided that the collective judgments always lead to a unique choice. This is analysed in the following section.


II. Arrow’s General Possibility Theorem and AI Decision Maker towards Equilibrium of Arrow’s Paradox


Suppose that we have not one but several AI decision makers and that they are required to come to an agreement on which course of action to take. Will they arrive at a reasonable set of outcomes, as a single AI decision maker would? It is argued that Arrow’s General Possibility Theorem has direct application to AIs and their decision-making process, which is based on the aggregation of preferences. Hence the problem of AIs making a decision may be compared to the problem of social choice, or group decision-making, in which the rankings of several alternatives by decision makers are to be combined into a single, “social,” ranking (e.g. voting). As noticed by Scott and Antonsson, ‘[i]n social choice, there is no longer a single decision maker, and the goal is to arrive at rational decisions that respect the sovereignty of the individual citizens involved in the decision. In the theory of group decision-making, a well-known objection to the validity of combining separate weak orders into a single social order is Kenneth J. Arrow’s General Possibility Theorem.’


Arrow’s ‘general possibility’ (or ‘impossibility’) theorem provides an answer to a very basic question in the theory of collective decision-making. Suppose decision makers (human beings) are presented with some alternatives to choose among. They could be candidates in an election, public projects, winter sports, or just about anything else. There are decision makers whose preferences will inform this choice, and the question is: which procedures are there for deriving, from what is known or can be found out about their preferences, a collective or “social” ordering of the alternatives from better to worse? Arrow’s answer is startling: ‘[Arrow’s theorem] implies that it is not possible to guarantee that a majority rule process will yield coherent choices.’ He argues there are no such procedures that satisfy certain apparently quite reasonable assumptions concerning the autonomy of the decision makers and the rationality of their preferences.


Arrow proposed four conditions to formalize a desirable decision situation for the decision makers’ choice:


First condition:

The system should reflect the wishes of more than just one individual (so there's no dictator).

Second condition:

If all voters prefer option A to option B, then A should come above B in the final result (this condition is sometimes called unanimity).

Third condition:

The voting system should always return exactly one clear final ranking (this condition is known as universality).

Fourth condition:

In the final result, whether one option is ranked above another, say A above B, should depend only on how the decision makers ranked A compared to B. It should not depend on how they ranked either of the two compared to a third option C. This condition is known as independence of irrelevant alternatives. (A worked example of the cycle these conditions can produce follows below.)
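
To see why these conditions bite, consider the classic three-voter example, sketched here in Python as a hypothetical illustration, in which pairwise majority voting produces a cycle:

from itertools import combinations

# Three voters with cyclic preferences (hypothetical illustration)
ballots = [["A", "B", "C"],   # voter 1
           ["B", "C", "A"],   # voter 2
           ["C", "A", "B"]]   # voter 3

for x, y in combinations("ABC", 2):
    # count ballots ranking x above y; the majority decides the pair
    x_wins = sum(b.index(x) < b.index(y) for b in ballots)
    winner = x if x_wins > len(ballots) / 2 else y
    print(f"majority prefers {winner} in {{{x}, {y}}}")

# Output:
#   majority prefers A in {A, B}
#   majority prefers C in {A, C}
#   majority prefers B in {B, C}
# A beats B, B beats C, yet C beats A: the pairwise majorities cycle.

Each individual ballot is perfectly rational, yet the aggregated pairwise majorities are intransitive, so no procedure that respects them can output the ‘exactly one clear final ranking’ demanded by the third condition.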


Arrow’s theorem points out that, under the four conditions listed above, when decision makers have particular conflicting views they are unable to make a fair decision and become trapped in a circular discussion about what they collectively want. This happens because self-interest is arguably the single largest motivator of economic activity. Adam Smith, in The Wealth of Nations, described it this way: ‘It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.’ Consequently, Arrow provided a convincingly positive analysis of the interaction between decision makers and the lack of reasonableness in their group preferences. This study puts forward the view that decisions made by AI decision makers, however, will lead to the equilibrium of Arrow’s Theorem: at least in theory, there is no possibility for AI self-interest to exist. The next issue then arises: how will AI decision makers handle the decision-making process in light of Arrow’s theorem? Part III of this study considers the process. Part IV argues for the creation of positive instructions for AI decision makers and their interpretation. Part V concludes the findings.


[TBC]


References:

(OSCOLA style of referencing)


[1] Jo Bac, Artificial Intelligence (AI), Dependent Legal Personhood & AI-Human Amalgamation – An Evolutionary Step for US Patent Law and AI (Edward Elgar Publishing, in press).

[2] Robert van Gulick, ‘Consciousness’ in Stanford Encyclopedia of Philosophy (2004) <https://plato.stanford.edu/entries/consciousness/> accessed 15 July 2018.

[3] See, for example, Jo Bac, ‘Consciousness & Artificial Intelligence (Part 1) - Mind Economics’ (The LightbulbAI, 25 March 2018) <www.lightbulbai.com/single-post/2017/04/20/Artificial-Intelligence-Consciousness> accessed 15 July 2018; Jo Bac, ‘Artificial Intelligence, Consciousness & Passion - Computational Theory of Mind & Economics’ (The LightbulbAI, 9 May 2018) <www.lightbulbai.com/single-post/2017/04/23/Consciousness---Mind-Economics-Artificial-Intelligence-Part-2---Computational-Theory-of-Mind> accessed 15 July 2018; Junichi Takeno, Creation of a Conscious Robot: Mirror Image Cognition and Self-Awareness (CRC Press 2012).

[4] Kenneth J Arrow, ‘Economic Theory and the Hypothesis of Rationality’ in The New Palgrave: Utility and Probability ([1987] 1989) 25-39.

[5] ibid.

[6] Dong Huk Park and others, ‘Attentive Explanations: Justifying Decisions and Pointing to the Evidence’ <www.groundai.com/project/attentive-explanations-justifying-decisions-and-pointing-to-the-evidence/> accessed 15 July 2018.

[7] ibid.

[8] ibid.

[9] Reynald-Alexandre Laurent, ‘“Elimination by aspects” and probabilistic choice’ <http://reynald.laurent.free.fr/EPA%20choix%20proba%20short%20GB2.pdf> accessed 15 July 2018.

[10] Marcus Pivato, ‘Condorcet meets Bentham’ (2015) 59 Journal of Mathematical Economics 58–65; see also Robin Farquharson, Theory of Voting (Oxford 1969).

[11] ibid.

[12] Robert Wilson, ‘On the Theory of Aggregation’ (1975) 10(1) Journal of Economic Theory 89–99.

[13] There are several properties of preferences that together imply that decision makers’ choices will be consistent. Economists assume that consumers have a set of preferences that they use to guide them in choosing between goods. These preferences have to satisfy properties such as completeness and transitivity. See, for example: Alberto Bisina and Thierry Verdierb, ‘The Economics of Cultural Transmission and the Dynamics of Preferences’ (2001) 97(2) Journal of Economic Theory 298-319.

[14] Michael J Scott and Erik K Antonsson, ‘Arrow’s Theorem and Engineering Design Decision Making’ (2000) 11(4) Research in Engineering Design 218-228.

[15] Kenneth A Shepsle, ‘Congress is a “They,” Not an “It”: Legislative Intent as Oxymoron’ (1992) 12 Int’l Rev L & Econ 239, 241.

[16] ibid.

[17] Kenneth J Arrow, Social Choice and Individual Values (1st ed, J Wiley 1951) 24-30.

[18] Adam Smith, The Wealth of Nations: Books I-III (New edn, Penguin Classics 1982).

[19] ibid 26-27.
