Medical Robots and Questions of Ethics & Liability
Chess teaches you to play by the rules and take responsibility for your actions, how to problem solve in an uncertain environment.
(Garry Kasparov)
I. Introduction
The ‘self-acting robot’ image of an inanimate entity escaping human control is millennia old. Only recently, however, have advances in artificially intelligent technology such as robotics moved this image into the realm of possibility. Examples include rehabilitation robots that support a patient's legs and hands during motor therapy, defibrillator devices, educational robots such as virtual mentors for learners, intelligent tutoring systems that track the ‘mental steps’ of the user during problem-solving tasks in order to predict the user’s understanding, and rehabilitation drones.
Robots and softbots in the area of rehabilitation have reached a stage where they are capable of acting autonomously. The adjective autonomous translates into their ability to self-act, self-learn and self-develop. This, in turn, transforms the relationship between users and this intelligent technology, because in order to act robots and softbots may not require direct human input or control. Some scholars suggest that this may improve the life of society as a whole. Yet, as others such as Desouza indicate, ‘the proliferation of these technologies may have unintended consequences.’
These consequences, which are outlined further in this note through case studies, need to be brought within the law's domain and control.
The challenge to legal systems may arise because these technological capabilities translate not only into support for users but also into a potential capacity for breaking the law. As suggested by Wilks, ‘[i]n most situations now imaginable, it will not be too hard to identify individuals, if there is a need to do so, behind programmes and machines. However, things may become more [difficult] as time goes on, and the simple substitution of responsible people for errant machines harder to achieve.’ Hence, the emerging issue can be expressed in one leading question: within the medical framework, if robots and softbots as autonomous entities are able to act contrary to the law, who or what should be liable for their actions?
II. Outline
The liability section has two parts. First, with the use of case studies, it considers the diversity of issues involved in the autonomous capabilities of robots and softbots within the medical and rehabilitation framework. The second part elaborates on questions that might arise regarding the autonomous actions of these entities and attempts to offer potential answers to those questions.
III. Case Studies
1. Case: Assisty Makes a Mistake
Each year, 400,000 people in the US die from ventricular fibrillation, an abnormal heart rhythm that causes cardiac arrest. A tool called a ‘defibrillator’ has saved many lives: this device has the capacity to restore the heart's rhythm. Recently, computerized automated defibrillators were designed for unskilled operators. The automatic mechanism monitors the patient's medical state and, if it so decides, administers the number and strength of electric shocks that is necessary (in the ‘opinion’ of the machine itself) for proper treatment. The question is where to place ultimate responsibility for its misadventure.
Consider the scenario below:
A nurse in a hospital employs a number of automatic defibrillators (‘Assisties’) to help her deal with an overwhelming amount of work. Once she has been contracted to work with those tools, the Assisties save many lives at the hospital. Eventually they are highly appreciated by patients and staff, and the board of directors decides that Assisty can operate on its own. However, on one occasion Assisty causes damage to a patient's heart.
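The autonomy at the heart of this scenario can be caricatured in a few lines of code. The sketch below is deliberately a toy, not real device logic: the function name assisty_decide, the threshold and the energy values are all invented for illustration.

```python
# Toy sketch only: the machine, not a human operator, 'decides' whether
# to shock and how strongly. All values below are invented; no real
# defibrillator logic is implied.

def assisty_decide(rhythm_irregularity: float) -> int:
    """Return the shock energy (in joules) Assisty chooses; 0 means no shock."""
    FIBRILLATION_THRESHOLD = 0.7  # invented cut-off
    if rhythm_irregularity > FIBRILLATION_THRESHOLD:
        return 200  # the machine's own 'opinion' of the necessary strength
    return 0

print(assisty_decide(0.9))  # -> 200, with no human judgment in the loop
```

The point of the sketch is structural: nothing human stands between the reading and the shock, which is precisely why responsibility for the damaged heart is hard to place.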
2. Case: ‘Personal Assistant’ - Softbot
People who cannot see are often unable to interact with a computer without assistive technologies. To overcome this barrier they use, most of the time, screen reader software and Braille displays. The question is where to place ultimate responsibility for its misadventure. Consider the scenario below:
A high-profile technology company releases screen reader and research software called Fly with Braille, which does online research and finds the best holiday and flight options for vision-impaired customers. John, a blind person, adopts the software. Annet, the software's agent, serves as a user interface; John channels his input through Annet. Her representation is elaborate: she exhibits a wide variety of voice outputs and appropriately timed pseudo-conversations with the user (John), and is convincing as an intelligent travel expert. Annet is used to observe the flight market and to offer predictions and advice on which flights and holidays to buy. Annet can even be asked to book the flight or holiday package herself via an online service. Over time, however, it becomes apparent that Annet's advice is poor, and thousands of vision-impaired people who took it very seriously end up losing a great deal of money, missing holidays or missing flights. Annet's user, John, brings a class action lawsuit.
3. Case: Droney & Johnny
In public opinion, the word ‘drone’ has negative connotations of military drones, kill lists and human ‘collateral damage.’ Nevertheless, computerized technology is arguably very rarely the problem per se; it is the application of a computerized object that is likely to cause harm. Drones can improve safety and reduce the risk of harm by taking over dangerous jobs that would otherwise be done by people or other entities, for example by navigating for visually impaired persons. The question is where to place ultimate responsibility for its misadventure.
Consider the scenario below:
Droney performs operations for Johnny, who is visually impaired. It informs Johnny about objects in his way, about free seats on the bus, and so forth. Johnny communicates with Droney via voice commands; Droney recognizes Johnny's voice and accepts only his commands. One day, however, Johnny is walking along a busy street and Droney does not hear Johnny's command asking whether it is safe to cross. Droney therefore does not warn Johnny that a car is approaching. Johnny is hit by the car and injured.
IV. Related Questions
Based on the information presented above consider the following questions:
(1) Was the factual element of the offense fulfilled by the robot or softbot itself?
(a) The answer to the above question is ‘no’
(b) The answer to the above question is ‘yes’
(1a) If the answer to the first question is ‘no’, the reasoning would probably look like this:
(a) Statutory liability for defective products?
(b) Who owns this robot or softbot? Can the owner bear the potential liability?
(c) Who employs this robot or softbot? Can the employer bear the potential liability?
(d) Who uses this robot or softbot? Can the user(s) bear the potential liability?
(e) Apportionment of liability?
(2) Could liability rest with the robot or softbot itself?
(3) (= 1b) Is the robot or softbot recognized by law; in other words, does it have legal personality?
(4) Does the robot or softbot have the general ability to consolidate awareness of the fact that its conduct was breaking the terms of the law?
(a) The answer to the above question is ‘no’
(b) The answer to the above question is ‘yes’
(4a) If ‘no’, this brings us back to question (1a), or to new proposals for robot and softbot liability
(4b) If ‘yes’, there is a need for new proposals for robot and softbot liability
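Read procedurally, the questions above form a simple decision tree, and a few lines of code can make the branching explicit. The Python sketch below is purely illustrative; the names liability_route, factual_element_by_machine and so on are invented for this note, and each legal question is crudely reduced to a yes/no answer.

```python
# Purely illustrative: the question flow of section IV as a decision tree.
# Every name below is invented for this sketch; each legal question is
# simplified to a boolean.

def liability_route(factual_element_by_machine: bool,
                    has_legal_personality: bool,
                    aware_of_wrongdoing: bool) -> str:
    """Trace one path through questions (1)-(4) above."""
    if not factual_element_by_machine:
        # (1a): fall back on conventional doctrines.
        return ("statutory liability for defective products, then "
                "owner / employer / user liability and apportionment")
    if not has_legal_personality:
        # (3): liability resting on the entity presupposes personhood.
        return "no legal personality: revert to the human-centred routes of (1a)"
    if aware_of_wrongdoing:
        # (4b): the conventional routes no longer fit.
        return "need for new proposals for robot and softbot liability"
    # (4a): back to (1a), or to new proposals.
    return "revert to (1a) or consider new proposals for robot and softbot liability"

print(liability_route(True, True, True))
```

On this toy model, an entity that fulfils the factual element, holds legal personality and was ‘aware’ of its wrongdoing falls outside the conventional routes entirely, which is exactly the gap the next section addresses.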
V. Potential Answers
The discussion concerning who or what should be liable for autonomous acts of infringement performed by robots and softbots has never generated anything more than deep intellectual divisions among scholars and legal practitioners. Some have argued that liability should always rest with the human proprietor who created the artificial intelligence; he or she should thus serve as the legal instrument responsible for keeping technology within the bounds of human governance and control. Arguably, this is justifiable only if legal systems can reach conclusions about which of the human beings involved should be legally liable for the autonomous acts of infringement these entities perform.
This note, in order to provide a potential answer, subdivides this key question into two smaller questions. The first is whether one may formally apply statutory liability for defective products, on the assumption that any liability concerning robots and softbots results from the fault of a human being (for example, the master). The fault could be a manufacturing defect, a design defect, or a failure to warn human users (also known as a marketing defect) about how to use this type of device safely and properly. Despite the possibility of robots and softbots acting autonomously, they remain a human construct. As a consequence, one could argue that there is no reason why the above principles could not be satisfactorily applied to these types of entities. Nonetheless, softbots may self-develop and self-learn; they are thus not merely a reflection of the original design created by their programmer or manufacturer. They are not an end product. Hence, one could argue that statutory liability for defective products could apply to robots such as self-driving cars and drones, but not to softbots that have the capacity for self-change.
The second question arises if, and only if, a fully autonomous softbot commits wrongs in ways that are entirely untraceable and cannot be attributed to the hand of a human being. As an illustration, this note revisits the second case study, ‘Personal Assistant’ - Softbot, in which the autonomous Annet (a personal assistant softbot) self-learns and self-develops and, using the Internet, books flights and holiday packages. Over time, however, thousands of vision-impaired people who took her advice very seriously end up losing a great deal of money, missing holidays or missing flights, and Annet's user, John, brings a class action lawsuit. The question then arises: what should be the rule at that point? Who should be liable for the autonomous actions of Annet?
In some cases, the wrongful act alone is sufficient to support a finding of negligence through the doctrine of res ipsa loquitur, a Latin phrase meaning ‘the thing speaks for itself.’ This is a rule of evidence developed at common law to assist a plaintiff in proving negligence. The areas of case law in which it is applied are various; thus, one sees no reason not to consider it in respect of this type of infringement. The doctrine was recognised for the first time in Byrne v Boadle, 159 Eng Rep 299, 300-301 (Ex 1863), an English case in which a barrel falling out of a window struck a pedestrian and caused injury. The Court held that the defendant was negligent under the principle of res ipsa loquitur even though the plaintiff could not affirmatively prove that negligent conduct caused the barrel to fall.
Res ipsa loquitur applies if the following conditions are met: (1) the accident or occurrence producing the injury is of a kind which ordinarily does not happen in the absence of someone’s negligence, (2) the injuries are caused by an agency or instrumentality within the exclusive control of the defendant, and (3) the injury-causing accident or occurrence is not due to any voluntary action or contribution on the part of the plaintiff. In line with the above, the doctrine rests upon showing that the plaintiff suffered damage that does not ordinarily occur without negligence and for which there is no other explanation. However, it has been declared that the instrumentality must have been under the defendant’s exclusive control, ‘otherwise the question of proximate cause complicates the issue and destroys the presumptions because the damage may have been as easily due to the negligence of a third person.’
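Stated abstractly, the three conditions form a conjunctive test: the first two must hold and the third must be absent. The minimal sketch below is offered purely for illustration; the parameter names are invented labels for the quoted conditions, not legal terms of art.

```python
# Illustrative only: the three-part res ipsa loquitur test as a
# conjunctive predicate. Parameter names are invented labels for the
# conditions quoted above.

def res_ipsa_loquitur_applies(ordinarily_requires_negligence: bool,
                              defendant_had_exclusive_control: bool,
                              plaintiff_contributed: bool) -> bool:
    """True only when conditions (1) and (2) hold and (3)'s bar is absent."""
    return (ordinarily_requires_negligence
            and defendant_had_exclusive_control
            and not plaintiff_contributed)

# Byrne v Boadle on this toy model: barrels do not ordinarily fall from
# windows without negligence, the barrel was under the defendant's control,
# and the pedestrian contributed nothing.
assert res_ipsa_loquitur_applies(True, True, False)
```

On this model, it is the second parameter, exclusive control, that Annet's case strains, as the next paragraph explains.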
Considering that there might be hundreds or even thousands of human beings involved in Annet's development and expansion, since Annet interacts with the external environment including the physical world and Internet users, each one of them may separately or jointly be the source of the ‘negligence of a third person.’ As long as legal systems are capable of investigating each of the human beings involved in Annet's expansion and of allocating liability accordingly, they will be able to address the legal issues surrounding her actions without significant modification. However, one may argue that the law is not sufficiently prepared or equipped to address this type of legal issue. This, in turn, brings us back to the primary question: who or what should be liable for the autonomous acts of Annet?
The other approach, most notably advocated by Wein, Snapper and Bostrom, is that there are situations in which legal systems should consider the artificial intelligence itself responsible, presuming that the law accords legal personhood to Annet in the first place. The extent to which common law systems have in the past accorded legal rights, and the duties that follow, to inanimate entities such as corporations merely in order to reassign legal responsibility to those entities redirects one's thought in this particular direction. Two main advantages could be derived from this argument. First, it would provide a more coherent picture of today's legal framework. Second, the issues of liability for these entities' independent actions could be answered.
Bibliography
Restatement (Third) of Torts: Products Liability, § 19.
Res ipsa loquitur cases, see, for example:
Zukowsky v Brown 79 Wn 2d 586, 592, 488 P2d 269 (1971);
Horner v Northern Pac Beneficial Ass’n Hosps Inc 62 Wn 2d 351, 359, 382 P2d 518 (1963): ‘whether the doctrine applies in a given case is a question of law;’ see also Metropolitan Mortgage & Sec Co v Washington Water Power 37 Wn App 241, 243, 679 P2d 943 (1984), in which the Court held that it is for the trial court to determine whether the doctrine applies.
FV Harper and FE Heckel, ‘Effect of Doctrine of Res Ipsa Loquitur’ (1928) 22 Illinois Law Review 724-747, 725;
JW Snapper, ‘Responsibility for Computer Based Errors’ (1985) 16 Metaphilosophy 289-295;
N Bostrom, ‘When Machines Outsmart Humans’ (2003) 35 Futures 759-764, 763;
L Wein, ‘The Responsibility of Intelligent Artifacts: Toward an Automation Jurisprudence’ (1992) 6 Harv JL & Tech 103-153, 121;
M Del Mar and W Twining, Legal Fictions in Theory and Practice (Springer 2015) 95-96;
T Fong, I Nourbakhsh and K Dautenhahn, ‘A Survey of Socially Interactive Robots’ (2003) 42(3-4) Robotics and Autonomous Systems 143-166;
KC Desouza, D Swindell, KL Smith, A Sutherland, K Fedorschak and C Coronel, ‘Local Government 2035: Strategic Trends and Implications of New Technologies’ (2015) 27 Technology Innovation <http://www.brookings.edu/~/media/research/files/papers/2015/05/29-local-government-strategic-trends-desouza/desouza.pdf> accessed 14 September 2016;
Y Wilks, ‘Responsible Computers?’ (invited contribution to the panel on Computers and Legal Responsibility, Proceedings of the International Joint Conference on Artificial Intelligence, 1985) <ijcai.org/Past%20Proceedings/IJCAI-85-VOL2/PDF/117.pdf> accessed 14 September 2016.
Computers and Legal Responsibility’ (1985) In Proc. of International Joint Conference on Artificial Intelligence <ijcai.org/Past%20Proceedings/IJCAI-85-VOL2/PDF/117.pdf> accessed 14 September 2016