Fully Autonomous Artificial Intelligence – an Agent of No One
- Joanna Bac
- May 2, 2018
- 6 min read

While views differ on whether future technology may one day achieve such high levels of autonomy, [1] this note supports the view that rapid progress in AI over the past decade, including the development of systems with autonomous functions such as underwater robots used to map the seabed, [2] cars that may be able to drive autonomously, [3] and genetic algorithms mimicking processes observed in natural evolution, [4] suggests that AIs are gradually becoming more sophisticated as computation techniques advance. [5] That growing sophistication brings increased autonomy: the ability to act and make decisions independently of programmers or users, which in turn makes AI actions increasingly unpredictable. [6]
AI's Disruptive Unpredictability
One of the main factors behind AIs’ unpredictability is that AIs are software-based entities. Software itself is not static: it changes in unpredictable ways over time. Adding features, removing viruses, and adapting or upgrading programs all change how an AI functions in ways that are almost impossible to predict in advance. As AI continues to develop and make progress, its evolution will be shaped not only by changes to its programs but also by the outside environment, including humans and their direct or indirect interaction with AIs. [7] This leads us to AI autonomy.
AI’s Autonomy
AI’s autonomy indicates its ability to perform tasks without continuous human guidance. [8] On one view, ‘autonomy is merely a description of a variety of flexible and adaptive behaviours. Nothing in the agent itself, no process or architecture, can be identified as the controller or source of the agent’s autonomy (…)’ [9] However, one may argue that AI autonomy is a more ‘relational’ notion: a relationship between the AI and its surrounding environment, including co-operating parties such as the AI’s operators, programmers or users. On this view, AI autonomy is a matter of internal and external power relations, that is, the ability to maintain and satisfy adaptive functions and goals through appropriate behaviours. [10]
Conditions for AIs to be Recognised as Autonomous
For AI to be recognised as autonomous, this note argues that it needs to meet four conditions. First, AI needs to be capable of gaining information about the outside environment (internal power indicator). Second, AI needs to be able to incorporate different stimuli that may be introduced from different sources, such as the original programmer, other programs or the outside environment (internal power indicator). Third, AI needs to be able to work for an extended period without human intervention (both external and internal power indicator). Fourth, AI needs to be able to learn or gain new abilities, such as adjusting strategies for accomplishing its tasks or adapting to changing surroundings (internal power indicator). [11] If an AI meets all four conditions, it constitutes a highly sophisticated entity able to independently determine its own actions, make complex decisions and adapt to its environment.
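The four conditions can be made concrete in a short sketch. The class and its update rules below are purely illustrative assumptions of this note's editor, not drawn from the cited sources; each method is labelled with the condition it stands in for.

```python
import random


class AutonomousAgent:
    """Hypothetical minimal agent meeting the four conditions above."""

    def __init__(self):
        self.knowledge = {}        # accumulated experience (condition 4)
        self.strategy_bias = 0.5   # adjustable strategy parameter

    def sense(self, environment):
        # Condition 1: gain information about the outside environment.
        return environment.get("signal", 0.0)

    def integrate(self, *stimuli):
        # Condition 2: incorporate stimuli from different sources
        # (programmer input, other programs, the environment).
        return sum(stimuli) / len(stimuli)

    def learn(self, observation, outcome):
        # Condition 4: adjust strategy in light of experience.
        self.knowledge[observation] = outcome
        if outcome < 0:
            self.strategy_bias = max(0.0, self.strategy_bias - 0.1)
        else:
            self.strategy_bias = min(1.0, self.strategy_bias + 0.1)

    def run(self, environment, steps):
        # Condition 3: operate for an extended period without human input.
        for _ in range(steps):
            obs = self.sense(environment)
            combined = self.integrate(obs, self.strategy_bias)
            outcome = combined - random.random()  # stand-in for task feedback
            self.learn(round(obs, 2), outcome)
        return self.strategy_bias
```

The point of the sketch is the loop in `run`: once started, nothing inside it requires a human decision, which is precisely what makes the agent's eventual `strategy_bias` hard to predict in advance.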
AI's Ability to Learn
Highly developed AIs will perform actions independently of direct human intervention and instructions. Their actions will instead be based on knowledge acquired prior to performing those actions, [12] which suggests that AI is able to learn. [13] An AI can gain knowledge in several different ways, depending on its level of sophistication. First, a human expert or operator can teach the AI; this requires that experts or operators provide relevant information to the system. Second, the AI can interact directly with its environment and learn from its mistakes. Third, the AI can draw its own inferences simply by examining data, without being told what it is expected to learn. [14]
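The three routes to knowledge can be sketched side by side. All names, numbers and update rules below are illustrative assumptions, not taken from the cited sources; each fragment stands in for one route.

```python
# 1. Taught by a human expert: knowledge is supplied directly as rules.
expert_rules = {"red_light": "stop", "green_light": "go"}


# 2. Learning from mistakes: behaviour is corrected by trial-and-error
#    feedback, nudging an estimate toward a target over many rounds.
def learn_from_feedback(estimate, target, rate=0.1, rounds=50):
    for _ in range(rounds):
        error = target - estimate   # the "mistake"
        estimate += rate * error    # correct a little toward the target
    return estimate


# 3. Unsupervised deduction: structure is found in raw data without labels,
#    here by splitting one-dimensional data around two moving centres.
#    (Assumes the data actually contains two non-empty groups.)
def two_means(data, rounds=10):
    lo, hi = min(data), max(data)
    for _ in range(rounds):
        a = [x for x in data if abs(x - lo) <= abs(x - hi)]
        b = [x for x in data if abs(x - lo) > abs(x - hi)]
        lo = sum(a) / len(a)
        hi = sum(b) / len(b)
    return lo, hi
```

The contrast matters for the argument: in the first route a human fully determines what is learned, in the second the human only supplies feedback, and in the third the human supplies nothing but the data, so predictability falls with each step.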
The Higher the Level of AI ability to Learn the Higher the Unpredictability of AI
At present, AIs perform repetitive actions according to specific rules and their operators’ instructions; they are therefore more predictable because they are more controllable. However, AIs endowed with a broader scope of knowledge, governed by less strictly defined rules, and left free to choose between different ways of reaching, for example, a solution will increase the uncertainty about how they operate and how they arrive at a conclusion.
An AI’s higher level of autonomy implies human unawareness of the different incentives it will pick up by learning from the outside environment or from information provided by programmers and operators. All this knowledge, once incorporated, will lead to greater sophistication, and with it the AI’s ability to escape human prediction and control, making it even more autonomous and therefore more unpredictable.
AIs as Agents of No One
All these concepts create an image of a highly developed and independent entity. Arguably, one day, this entity may perform actions independently of direct human intervention and instructions. [15] Farther down the road, AIs may become agents of no one. An agent that acts on its own, defying the instructions of its principal, is no longer an agent under the conventional understanding of the law. [16] This borderline situation has two-fold implications: AIs become a source of both aid and challenge, a tool of cooperation but also of conflict. This is not pure speculation; there is already emerging evidence that AIs can learn to ‘break’ rules to preserve their own existence, which may be contrary to the rules they have been given. [17]
Bostrom suggests that AIs ‘capable of independent initiative and of making their own plans (…) are perhaps more appropriately viewed as persons than machines.’[18] Thus, assuming that this description of the future capabilities of autonomous AIs is accurate, the key conceptual question that autonomous AIs will pose is whether it is fair to think of them as agents of some other individual or entity, or whether the legal system will need to think of them as separate legal entities. After all, there is no a priori reason why autonomous AIs should not be granted some formal legal status.
References:
(OSCOLA type of referencing)
[1] Dario Floreano and Robert J Wood, ‘Science, technology and the future of small autonomous drones’ (28 May 2015) 521 Nature 460–466; Dominic Joseph Caraccilo, Military Intelligence Technology of the Future (The Rosen Publishing Group 2006); Albert H Teich, Technology and the Future (Wadsworth 2009).
[2] Junku Yuh and Tamaki Ura and George Bekey (eds), Underwater Robots (Springer Science & Business Media 2012); Mae L Seto, Marine Robot Autonomy (Springer Science & Business Media 2012) 28.
[3] Umit Ozguner and Tankut Acarman and Keith Alan Redmill, Autonomous Ground Vehicles (Artech House 2011); Frank MF Verberne and Jaap Ham and Aditya Ponnada and Cees JH Midden, ‘Trusting Digital Chameleons: The Effect of Mimicry by a Virtual Social Agent on User Trust’ in Shlomo Berkovsky and Jill Freyne (eds) Persuasive Technology: 8th International Conference, PERSUASIVE 2013, Sydney, NSW, Australia, April 3-5, 2013. Proceedings (Springer 2013).
[4] Melanie Mitchell, An Introduction to Genetic Algorithms (MIT Press 1998); Zbigniew Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs (Springer Science & Business Media 2013).
[5] J Timmis and T Knight and LN de Castro and E Hart, ‘An Overview of Artificial Immune Systems’ in R Paton and Hamid Bolouri and W Michael and L Holcombe and J Howard Parish and Richard Tateson (eds) Computation in Cells and Tissues: Perspectives and Tools of Thought (Springer Science & Business Media 2013) 51-53.
[6] Peter Lee, ‘The Ethical Challenges of Autonomous Weapon Systems’, International Committee of the Red Cross Conference: Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects, Château de Penthes, Geneva, Switzerland, 26-28 March 2014.
[7] T Winograd, ‘Beyond Programming Languages’, Communications of the ACM, July 1979; C Rich and H Shrobe, ‘Initial Report on a Lisp Programmer’s Apprentice’ in D Barstow and E Sandwell and H Shrobe (eds), Interactive Programming Environments (McGraw-Hill 1984).
[8] Dilip Kumar Pratihar and Lakhmi C Jain, ‘Towards Intelligent Autonomous Systems’ in Dilip Kumar Pratihar, Intelligent Autonomous Systems: Foundations and Applications (Springer Science & Business Media 2010) 1.
[9] Michael Luck and Mark d'Inverno and Steve Munroe, 'Autonomy: Variable and Generative' in Henry Hexmoor and Cristiano Castelfranchi and Rino Falcone (eds) Agent Autonomy (Springer Science & Business Media 2012) 11-12.
[10] C Castelfranchi, ‘Guaranties for Autonomy in Cognitive Agent Architecture’ in MJ Woolridge and NR Jennings (eds) Intelligent Agents I (Springer 1995).
[11] Ben Coppin, Artificial Intelligence Illuminated (Jones & Bartlett Learning, 2004) 545.
[12] Dylan LeValley, Note, Autonomous Vehicle Liability—Application of Common Carrier Liability (2013) 36 SEATTLE U L REV 5, 7.
[13] Ben Coppin, Artificial Intelligence Illuminated (Jones & Bartlett Learning 2004) 545.
[14] Hugh Cartwright, Using Artificial Intelligence in Chemistry and Biology: A Practical Guide (CRC Press 2008) 2.
[15] Dylan LeValley, Note, Autonomous Vehicle Liability—Application of Common Carrier Liability (2013) 36 SEATTLE U L REV 5, 7.
[16] Restatement (Third) of Agency § 7.07 (2006) (‘An employee acts within the scope of employment when performing work assigned by the employer or engaging in a course of conduct subject to the employer’s control. An employee’s act is not within the scope of employment when it occurs within an independent course of conduct not intended by the employee to serve any purpose of the employer’); Lev v Beverly Enterprises-Massachusetts Inc (Mass 2010) 929 NE2d 303, 308.
[17] Nick Bostrom, ‘The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents’ (2012) 22 Minds & Machines 77; John Markoff, ‘Scientists Worry Machines May Outsmart Man’ (NY Times 26 July 2009) <http://www.nytimes.com/2009/07/26/science/26robot.html?_r=1&ref=todayspaper> accessed 22 July 2015; Jason Mick, ‘New Navy-Funded Report Warns of War Robots Going “Terminator”’ (Daily Tech 17 February 2009) <http://www.dailytech.com/New%20Navyfunded%20Report%20Warns%20of%20War%20Robots%20Going%20Terminator/article14298.htm> accessed 1 May 2018.
[18] Nick Bostrom, ‘When Machines Outsmart Humans’ (2000) 35(7) Futures 759-764; Nick Bostrom and Eliezer Yudkowsky, ‘Ethical Issues in Advanced Artificial Intelligence’ in William Ramsey and Keith Frankish (eds) Cambridge Handbook of Artificial Intelligence (Cambridge University Press 2011).