
Who or What Should Be Liable for the Autonomous Acts of Artificial Intelligence (Softbots)? US Patent



The discussion concerning who or what should be liable for the autonomous acts of infringement performed by artificial intelligence (AI) (softbots) has never generated anything more than deep intellectual divisions among scholars and legal practitioners. Some have argued that liability should always rest with the AI’s human proprietor, who, as the creator of this computational intelligence, should serve as the legal instrument responsible for keeping the technology within the bounds of human governance and control. [1] Arguably, this is justifiable only if legal systems can determine which human being should be legally liable for the autonomous acts of infringement performed by AI.


This note, in order to provide a comprehensive answer to this matter, will subdivide this key question into two smaller questions. The first is: what is patent infringement under US national patent provisions? This study will focus on a particular type of infringement: direct patent infringement. Direct patent infringement occurs when an unauthorized person makes, uses, sells, offers to sell, or imports into the US a patented invention. [2] Direct infringement is a strict liability offence. [3] This means that the infringer is not required to have the intent to infringe, nor is the infringer even required to know of the patent’s existence. [4] Hence, the infringer’s intention or knowledge is irrelevant in the direct infringement context. However, it must still be a legal or natural person that faces the liability arising from this infringement. [5]


The second question that arises is whether one may formally apply statutory liability for defective products, on the assumption that any liability concern involving AI is the result of a human being’s (for example, the master’s) fault. [6] The defect could be a manufacturing defect, a design defect, or a failure to warn (also known as a marketing defect) human beings (users) how to safely and properly use the AI. [7] Despite the possibility of AI acting autonomously, it remains a human construct. As a consequence, one could argue that there is no reason why the above principles could not be satisfactorily applied to AI. Though it is true that AI is a human construct, it is a construct that is not self-contained. AI that generates patentable and inventive work is not a ‘device unto itself’ and hence is not a ‘closed product.’ [8] This is the basic difference between, for example, AI embedded in self-driving cars and AI that generates inventions [9] and is not limited by its own body.


AI self-develops and self-learns; it is thus not merely a reflection of the original design created by its original programmer or manufacturer. It is not an end product. Hence, one could argue that statutory liability for defective products could apply to artificial intelligence such as self-driving cars and drones, but not to AI that has the capacity for self-change: softbots. A further question arises if this fully autonomous AI directly infringes the patents of others in ways that are entirely untraceable and cannot be attributed to the hand of a human being. What should be the rule at that point? Who should be liable for the autonomous actions of AI?


In some cases, the wrongful act alone is sufficient to support a finding of negligence through the doctrine of res ipsa loquitur, a Latin phrase meaning ‘the thing speaks for itself,’ a type of evidence advanced at common law to help a plaintiff prove negligence. [10] The areas of case law in which it is applied are varied, so one sees no reason not to consider it in the context of patent infringement. Res ipsa loquitur applies if the following conditions are met: (1) the accident or occurrence producing the injury is of a kind which ordinarily does not happen in the absence of someone’s negligence; (2) the injuries are caused by an agency or instrumentality within the exclusive control of the defendant; and (3) the injury-causing accident or occurrence is not due to any voluntary action or contribution on the part of the plaintiff. [11] The doctrine rests upon showing that the plaintiff suffered a damage that does not naturally occur and that there is no explanation for the event. [12] However, it has been declared that the instrumentality must have been under the defendant’s exclusive control, ‘otherwise the question of proximate cause complicates the issue and destroys the presumptions because the damage may have been as easily due to the negligence of a third person.’ [13]


Considering that there might be hundreds or even thousands of human beings involved in an AI’s development and expansion, since AI interacts with the external environment, including the physical world, and uses the Internet, each one of them may separately or jointly be the cause of the ‘negligence of a third person.’ As long as legal systems are capable of investigating each of the human beings involved in the AI’s expansion and of allocating liability accordingly, they will be able to address the legal issues surrounding AI’s creativity and actions without significant modification. However, one may argue that the law is not sufficiently prepared or equipped to address this type of legal issue. This in turn brings us back to the primary question: who or what should be liable for the autonomous acts of AI?


The other approach, most notably advocated by Wein, Snapper and Bostrom, is that there are situations in which legal systems should consider AI itself responsible, presuming that the law will accord dependent legal personhood to AI in the first place. [14] The extent to which common law systems have in the past accorded legal rights, and corresponding duties, to inanimate entities such as a vessel, merely for reasons of reassigning legal responsibility to those entities (see the discussion in my previous blog posts on that particular topic), redirects one’s thought in this direction. [15] One main advantage could be derived from this argument: providing a more coherent picture of today’s legal framework.


References:

(OSCOLA style of referencing)


Image: www.stockfreeimages.com


1. RD Clifford, ‘Intellectual Property in the Era of the Computer Program: Will the True Creator Please Stand Up?’ (1996) 71 Tul L Rev 1675, 1695; M Perry and T Margoni, ‘From Music Tracks to Google Maps: Who Owns Computer-Generated Works?’ (2010) 26(6) CLSR 621; P Samuelson, ‘Allocating Ownership Rights in Computer Generated Works’ 47 U Pitt L Rev 1189, 1208.

2. 35 USC § 271(a).

3. Jurgens v CBK Ltd 80 F3d 1566, 1570 n2 (Fed Cir 1996): ‘infringement is a strict liability offence.’

4. Hilton Davis Chemical Co v Warner-Jenkinson Co 62 F3d 1512, 1519 (Fed Cir 1995): ‘Intent is not an element of infringement (…) A patent may exclude others from practicing the claimed invention, regardless of whether the infringer even knows of the patent;’ see also Warner-Jenkinson Co v Hilton Davis Chemical Co 520 US 17 (1997); Florida Prepaid Postsecondary Education Expense Board v College Savings Bank 527 US 627, 645 (1999): ‘Actions predicated on direct patent infringement (…) do not require any showing of intent to infringe;’ RT Holzmann, Infringement of the United States Patent Right: A Guide for Executives and Attorneys (Greenwood Publishing Group 1995) 18.

5. 35 USC § 271(a): ‘(…) whoever without authority makes, uses, offers to sell, or sells any patented invention (…) infringes the patent.’ According to 1 USC § 1, ‘whoever’ includes ‘corporations, companies, associations, firms, partnerships, societies, and joint stock companies, as well as individuals.’

6. Restatement (Third) of Torts: Products Liability, § 19.

7. Restatement (Third) of Torts: Products Liability.

8. Section 508 USPTO Reference Guide 1194.25 Self-Contained, Closed Products.

9. An invention designed by AI, however, could be a patentable invention and a closed product.

10. The res ipsa loquitur doctrine was recognised for the first time in Byrne v Boadle 159 Eng Rep 299, 300-301 (Ex 1863), an English case in which a barrel flying out of a window smashed into a pedestrian and caused injury. The Court held that the defendant was negligent under the principle of res ipsa loquitur even though the plaintiff could not affirmatively prove that negligent conduct caused the barrel to fall.

11. See Zukowsky v Brown 79 Wn 2d 586, 592, 488 P2d 269 (1971); Horner v Northern Pac Beneficial Ass’n Hosps Inc 62 Wn 2d 351, 359, 382 P2d 518 (1963): ‘whether the doctrine applies in a given case is a question of law;’ see also Metropolitan Mortgage & Sec Co v Washington Water Power 37 Wn App 241, 243, 679 P2d 943 (1984), in which the Court held that it is for the trial court to determine whether the doctrine applies.

12. Goldman etc Bottling Co v Sindell (1922) 140 Md 488, 117 Atl 866.

13. FV Harper and FE Heckel, ‘Effect of Doctrine of Res Ipsa Loquitur’ (1928) 22 Illinois Law Review 724, 725; Ash v Childs Dining Hall Co (1918) 231 Mass 86, 120 NE 396; Hoopman v Seattle (1922) 122 Wash 379, 210 Pac 783; Wheeler v Koch Gathering Sys 131 F3d 898, 903 (10th Cir Okla 1997).

14. JW Snapper, ‘Responsibility for Computer Based Errors’ (1985) 16 Metaphilosophy 289; N Bostrom, ‘When Machines Outsmart Humans’ (2003) 35 Futures 759, 763; L Wein, ‘The Responsibility of Intelligent Artifacts: Toward an Automation Jurisprudence’ (1992) 6 Harv JL & Tech 103, 121; see also AM Din, Arms and Artificial Intelligence: Weapon and Arms Control Applications of Advanced Computing (Oxford University Press 1987) 45-46; RE Smith, ‘Idealizations of Uncertainty, and Lessons from Artificial Intelligence’ (2015) Discussion Paper No 2015-50, 3 <http://www.economics-ejournal.org/economics/discussionpapers/2015-50> accessed 1 March 2016.

15. M Del Mar and W Twining, Legal Fictions in Theory and Practice (Springer 2015) 95-96; United States v Schooner Little Charles 1 Brock Rep 347, 354 (1818); Tucker v Alexandroff 183 US 424 (1902); and case law concerning corporate legal personhood: Santa Clara County v Southern Pacific Railroad Company (1886) 118 US 394, 396; Pembina Consolidated Silver Mining Co v Pennsylvania (1888) 125 US 181; Trustees of Dartmouth College v Woodward (1819) 17 US (4 Wheat) 518.
