Legal Personhood of Autonomous Software Intelligence (SI) & Protection of Human Rights
AI’s autonomy denotes its ability to perform tasks without continuous human guidance. [1] That is, AI’s ‘autonomy is merely a description of a variety of flexible and adaptive behaviours. Nothing in AI itself, no process or architecture, can be identified as the controller or source of AI’s autonomy (…)’ [2] However, one may argue that AI’s autonomy is a more ‘relational’ notion: a relationship between the AI and its surrounding environment, including co-operating parties such as the AI’s operators, programmers, or users. Hence, AI autonomy is a matter of internal and external power relations, that is, the ability to maintain and satisfy adaptive functions and goals through appropriate behaviours. [3] AI of this sophistication rises to the level of an agent, meaning it can perceive its environment through sensors and act upon that environment. This definition recalls the AIMA agent.
Artificial Intelligence: A Modern Approach (AIMA)
The AIMA agent is ‘anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.’ If the environment is whatever provides input and receives output, and if we take receiving input to be sensing and producing output to be acting, then every program is an agent. To distinguish an agent from a mere program, therefore, one must define ‘environment’, ‘sensing’ and ‘acting’ in a manner that reflects that difference. With this in mind, this study employs the Wooldridge-Jennings definition of an agent, which states:
‘(…) a hardware or (more usually) software-based computer system that enjoys the following properties:
- autonomy: agents operate without the direct intervention of humans or others, and have some kind of control over their actions and internal state;
- social ability: agents interact with other agents (and possibly humans) via some kind of agent-communication language;
- reactivity: agents perceive their environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the internet, or perhaps all of these combined), and respond in a timely fashion to changes that occur in it;
- pro-activeness: agents do not simply act in response to their environment; they are able to exhibit goal-directed behaviour by taking the initiative.’ [4]
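The four Wooldridge-Jennings properties can be made concrete in code. The following is a minimal, illustrative sketch (not drawn from the cited paper; the class and method names are hypothetical) showing where each property lives in a toy agent:

```python
# Hypothetical toy agent illustrating the four Wooldridge-Jennings
# properties. All names here are illustrative, not a standard API.

class SimpleAgent:
    def __init__(self, goal):
        self.goal = goal    # pro-activeness: a goal it pursues on its own initiative
        self.state = {}     # autonomy: internal state under the agent's own control
        self.inbox = []     # social ability: messages received from other agents

    def perceive(self, environment):
        # reactivity: sense changes in the environment and update internal state
        self.state.update(environment)

    def receive(self, message):
        # social ability: accept a message in some shared communication format
        self.inbox.append(message)

    def act(self):
        # pro-activeness: take initiative toward the goal rather than
        # merely reacting to the latest stimulus
        if self.goal in self.state:
            return f"progress toward {self.goal}"
        return "explore"

agent = SimpleAgent(goal="deliver")
agent.perceive({"deliver": True})
print(agent.act())  # progress toward deliver
```

The point of the sketch is the division of responsibilities: nothing outside the agent writes to `state` directly, which is what distinguishes this architecture from an ordinary program that simply maps input to output.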
AI Autonomy and Legal Personhood
As the analysis above suggests, granting legal personhood to self-developing, self-learning, and autonomously acting AI can be considered a logical and reasonable move where AI can autonomously cause ‘devastating damage’ and the agency principles devised for less advanced applications are unsuitable, because the principal, such as the producer, operator, or designer, has no control over the action. Arguably, it is the growing unpredictability of AI that subjects it to legal control. One can simply argue that legal personality, however hard it is to conceptualize, is not granted exclusively to humans: if English admiralty law treats ships as legal persons, and other legal systems have recognized corporations as legal persons, nothing in principle stops legal systems from granting the same status to AI. Yet one must consider whether unpredictability alone is a sufficient condition for granting AI legal personhood. Can all types of inanimate objects be granted legal personhood? Why is it that some inanimate objects, such as vessels or corporations, have legal personhood, while others, e.g. batting-practice pitching machines or self-driving cars, do not? Can all AI be qualified as capable of holding the legal rights and duties that come with legal personhood?
Conditions for AI to be Recognised as Autonomous
For AI to be recognised as autonomous it needs to meet four conditions. First, AI needs to be capable of gaining information about the outside environment (internal power indicator). Second, AI needs to be able to incorporate different stimuli introduced from different sources, such as the original programmer, other programs, or the outside environment (internal power indicator). Third, AI needs to be able to work for an extended period without human intervention (both external and internal power indicator). Fourth, AI needs to be able to learn or gain new abilities, such as adjusting strategies for accomplishing its tasks or adapting to changing surroundings (internal power indicator). [5] An AI that meets all four conditions is a highly sophisticated entity able to determine its own actions independently, make complex decisions, and adapt to its environment. In the not-so-distant future we will see AI that evaluates its environment and takes actions autonomously, becoming the agent of no one. Hence the agency principles devised for less advanced applications will not be suitable.
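Because the test is conjunctive, all four conditions must hold before a system counts as autonomous in this sense. A minimal sketch of that checklist (the flag names are hypothetical labels for the four conditions, not an established standard):

```python
# Hypothetical checklist encoding the four autonomy conditions from the
# text. A system qualifies only if every condition is satisfied.

def is_autonomous(ai: dict) -> bool:
    conditions = [
        ai.get("senses_environment", False),    # 1. gains outside information
        ai.get("integrates_stimuli", False),    # 2. incorporates varied stimuli
        ai.get("runs_unattended", False),       # 3. extended unattended operation
        ai.get("learns_new_abilities", False),  # 4. learns/adapts strategies
    ]
    return all(conditions)

candidate = {
    "senses_environment": True,
    "integrates_stimuli": True,
    "runs_unattended": True,
    "learns_new_abilities": False,  # fails the fourth condition
}
print(is_autonomous(candidate))  # False
```

On this reading, a conventional rule-based system that senses, integrates, and runs unattended still falls short of autonomy, and thus of any case for personhood, if it cannot learn.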
Conclusion
AI gradually becomes more sophisticated with advances in computation techniques. AI’s increased autonomy, its ability to act and make decisions independently of its programmers or operators, brings with it the increased unpredictability of AI and its actions. Consequently, legal personhood for AI will be a necessary step for legal systems to take, and in taking it they will protect those whom laws are supposed to serve: humans.
Bibliography
[1] Dilip Kumar Pratihar and Lakhmi C Jain, ‘Towards Intelligent Autonomous Systems’ in Dilip Kumar Pratihar Intelligent Autonomous Systems: Foundations and Applications (Springer Science & Business Media 2010) 1.
[2] Michael Luck, Mark d’Inverno and Steve Munroe, ‘Autonomy: Variable and Generative’ in Henry Hexmoor, Cristiano Castelfranchi and Rino Falcone (eds), Agent Autonomy (Springer Science & Business Media 2012) 11-12.
[3] C Castelfranchi, ‘Guarantees for Autonomy in Cognitive Agent Architecture’ in MJ Wooldridge and NR Jennings (eds), Intelligent Agents I (Springer 1995).
[4] M Wooldridge and NR Jennings, ‘Intelligent Agents: Theory and Practice’ (1995) 10 Knowledge Engineering Review 115-152.
[5] Ben Coppin, Artificial Intelligence Illuminated (Jones & Bartlett Learning, 2004) 545.