
ARTIFICIAL INTELLIGENCE AND THE LIMITS OF LEGAL PERSONALITY

    SIMON CHESTERMAN*

Abstract As artificial intelligence (AI) systems become more sophisticated and play a larger role in society, arguments that they should have some form of legal personality gain credence. The arguments are typically framed in instrumental terms, with comparisons to juridical persons such as corporations. Implicit in those arguments, or explicit in their illustrations and examples, is the idea that as AI systems approach the point of indistinguishability from humans they should be entitled to a status comparable to natural persons. This article contends that although most legal systems could create a novel category of legal persons, such arguments are insufficient to show that they should.

Keywords: public international law, artificial intelligence, legal personality, natural person, juridical person, corporation, corporate veil, copyright, patent.

In 2021 the Bank of England will complete its transition from paper to polymer with the release of a new 50 pound note. A public selection process saw almost a quarter of a million nominations for the face of the new note; the final decision, announced in July 2019, was that Alan Turing would be featured. Turing was a hero for his codebreaking during the Second World War. He also helped establish the discipline of computer science, laying the foundations for what we now call artificial intelligence (AI).1 Perhaps his best-known contribution, however, is the eponymous test for when true 'intelligence' has actually been achieved.

Turing modelled it on a parlour game popular in 1950. A man and a woman sit in a separate room and provide written answers to questions; the other participants have to guess who provided which answer. Turing posited that a similar 'imitation game' could be played with a computer. When a machine

* National University of Singapore, [email protected].
1 For a discussion of attempts to define AI, see SJ Russell and P Norvig, Artificial Intelligence: A Modern Approach (3rd edn, Prentice Hall 2010) 1–5. Four broad approaches can be identified: acting humanly (the Turing Test), thinking humanly (modelling cognitive behaviour), thinking rationally (building on the logicist tradition), and acting rationally (a rational-agent approach favoured by Russell and Norvig as it is not dependent on a specific understanding of human cognition or an exhaustive model of what constitutes rational thought).

© The Author(s) 2020. Published by Cambridge University Press for the British Institute of International and Comparative Law. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

    [ICLQ vol 69, October 2020 pp 819–844] doi:10.1017/S0020589320000366


could fool people into believing that it was human, we might properly say that it was intelligent.2

Early successes along these lines came in the 1960s with programs like Eliza. Users were told that Eliza was a psychotherapist who communicated through words typed into a computer. In fact, 'she' was an algorithm using a simple list-processing language. If the user typed in a recognised phrase, it would be reframed as a question. So after entering 'I'm depressed,' Eliza might reply 'Why do you say that you are depressed?' If it didn't recognise the phrase, the program would offer something generic, like 'Can you elaborate on that?' Even when they were told how it worked, some users insisted that Eliza had 'understood' them.3
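The mechanism is simple enough to sketch in a few lines. The following Python fragment is an illustrative reconstruction only—the rules and phrasings are invented for this example, and the original program used a far richer script language—but the reframe-or-deflect logic mirrors the behaviour described above.

    import re

    # Invented rules for illustration; the original script was far richer.
    RULES = [
        (re.compile(r"i'?m (.*)", re.IGNORECASE), 'Why do you say that you are {}?'),
        (re.compile(r'i feel (.*)', re.IGNORECASE), 'How long have you felt {}?'),
    ]
    DEFAULT = 'Can you elaborate on that?'

    def reply(utterance):
        """Reframe a recognised phrase as a question; otherwise deflect generically."""
        text = utterance.strip().rstrip('.!?')
        for pattern, template in RULES:
            match = pattern.fullmatch(text)
            if match:
                return template.format(match.group(1))
        return DEFAULT

    print(reply("I'm depressed"))    # Why do you say that you are depressed?
    print(reply('It rained today'))  # Can you elaborate on that?

Nothing in this loop represents understanding; the striking thing, as the passage above notes, is that users ascribed understanding to it anyway.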

Parlour games aside, why should it matter if a computer is 'intelligent'? For several decades, the Turing Test was associated more with the question of whether AI itself was possible, rather than the legal status of such an entity. Yet it is commonly invoked in discussions of legal personality for AI, from Lawrence Solum's seminal 1992 article onwards.4 Though no longer regarded as a serious measure of modern AI in a technical sense, the Turing Test's longevity as a trope points to a tension in debates over personality that is often overlooked.

As AI systems become more sophisticated and play a larger role in society, there are at least two discrete reasons why they might be recognised as persons before the law. The first is so that there is someone to blame when things go wrong. This is presented as the answer to potential accountability gaps created by their speed, autonomy, and opacity.5 A second reason for recognising personality, however, is to ensure that there is someone to reward when things go right. A growing body of literature examines ownership of intellectual property created by AI systems, for example.

The tension in these discussions is whether personhood is granted for instrumental or inherent reasons. Arguments are typically framed in instrumental terms, with comparisons to the most common artificial legal person: the corporation. Yet implicit in many of those arguments, or explicit in their illustrations and examples, is the idea that as AI systems approach the point of indistinguishability from humans—that is, when they pass Turing's Test—they should be entitled to a status comparable to natural persons.

2 AM Turing, 'Computing Machinery and Intelligence' (1950) 59 Mind 433.
3 RS Wallace, 'The Anatomy of ALICE' in R Epstein, G Roberts, and G Beber (eds), Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer (Springer 2009) 184–5.
4 LB Solum, 'Legal Personhood for Artificial Intelligences' (1992) 70 NCLRev 1235–7. Solum himself credits Christopher Stone as first mooting the possibility in a footnote two decades earlier: CD Stone, 'Should Trees Have Standing? Towards Legal Rights for Natural Objects' (1972) 45 SCalLRev 456 n 26.

5 See S Chesterman, 'Artificial Intelligence and the Problem of Autonomy' (2020) 1 Notre Dame Journal of Emerging Technologies 210; S Chesterman, 'Through a Glass, Darkly: Artificial Intelligence and the Problem of Opacity' (2021) AJCL (forthcoming).


Until recently, such arguments were all speculative. Then in 2017 Saudi Arabia granted 'citizenship' to the humanoid robot Sophia6 and an online system with the persona of a seven-year-old boy was granted 'residency' in Tokyo.7 These were gimmicks—Sophia, for example, is essentially a chatbot with a face.8 In the same year, however, the European Parliament adopted a resolution calling on its Commission to consider creating 'a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.'9

This article begins with the most immediate challenge, which is whether some form of juridical personality would fill a responsibility gap or be otherwise advantageous to the legal system. Based on the history of corporations and other artificial legal persons, it does not seem in doubt that most legal systems could grant AI systems a form of personality; the more interesting questions are whether they should and what content that personality might have.

The second section then turns to the analogy with natural persons. It might seem self-evident that a machine could never be a natural person. Yet for centuries slaves and women were not recognised as full persons either. If one takes the Turing Test to its logical, Blade Runner, conclusion, it is possible that AI systems truly indistinguishable from humans might one day claim the same status. Although arguments about 'rights for robots' are presently confined to the fringes of the discourse, this possibility is implicit in many of the arguments in favour of AI systems owning the intellectual property that they create.

Taken seriously, moreover, the idea that AI systems could equal humans suggests a third reason for contemplating personality. For once equality is achieved, there is no reason to assume that AI advances would stop there. Though general AI remains science fiction for the present, it invites consideration whether legal status could shape or constrain behaviour if or when humanity is surpassed. Should it ever come to that, of course, the question might not be whether we recognise the rights of a general AI, but whether it recognises ours.

6 O Cuthbert, 'Saudi Arabia Becomes First Country to Grant Citizenship to a Robot' (Arab News, 26 October 2017).
7 A Cuthbertson, 'Artificial Intelligence "Boy" Shibuya Mirai Becomes World's First AI Bot to Be Granted Residency' (Newsweek, 6 November 2017).
8 D Gershgorn, 'Inside the Mechanical Brain of the World's First Robot Citizen' Quartz (12 November 2017).
9 European Parliament Resolution with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) (European Parliament, 16 February 2017) para 59(f).


I. JURIDICAL PERSONALITY: A BODY TO KICK?

Legal personality is fundamental to any system of laws. The question of who can act, who can be the subject of rights and duties, is a precursor to almost every other issue. Yet close examination of these foundations reveals surprising uncertainty and disagreement. Despite this, as John Dewey observed in 1926, 'courts and legislators do their work without such agreement, sometimes without any conception or theory at all' regarding the nature of personality. Indeed, he went on, recourse to theory has 'more than once operated to hinder rather than facilitate the adjudication of a special question of right or obligation'.10

In practice, the vast majority of legal systems recognise two forms of legal person: natural and juridical. Natural persons are recognised because of the simple fact of being human.11 Juridical persons, by contrast, are non-human entities that are granted certain rights and duties by law. Corporations and other forms of business associations are the most common examples, but many other forms are possible. Religious, governmental, and intergovernmental entities may also act as legal persons at the national and international level.

It is telling that these are all aggregations of human actors, though there are examples of truly non-human entities being granted personhood. In addition to the examples mentioned in the introduction, these include temples in India,12 a river in New Zealand,13 and the entire ecosystem of Ecuador.14 There seems little question that a State could attribute some kind of personality to new entities like AI systems;15 if that happens, recognition would likely be accorded by other States also.16

    A. Theories of Juridical Personality

Scholars and law reform bodies have already suggested attributing AI systems with some form of legal personality to help address liability questions, such as an automated driving system entity in the case of driverless cars whose

10 J Dewey, 'The Historic Background of Corporate Legal Personality' (1926) 35 YaleLJ 660.
11 This presumes, of course, agreement on the meaning of 'human' and terms such as birth and death. See N Naffine, 'Who Are Law's Persons? From Cheshire Cats to Responsible Subjects' (2003) 66 MLR 346.
12 See eg Shiromani Gurdwara Prabandhak Committee, Amritsar v Shri Somnath Dass AIR 2000 SC 1421 (Supreme Court of India).
13 Te Awa Tupua (Whanganui River Claims Settlement) Act 2017 (New Zealand), section 14(1). This followed designation of the Te Urewera National Park as 'a legal entity, [with] all the rights, powers, duties, and liabilities of a legal person'. Te Urewera Act 2014 (New Zealand), section 11(1).
14 Constitution of the Republic of Ecuador 2008 (Ecuador) art 10.
15 For a discussion of the limits of what can be a legal person, see VAJ Kurki, A Theory of Legal Personhood (Oxford University Press 2019) 127–52.
16 See eg Bumper Development Corp. v Commissioner of Police for the Metropolis [1991] 1 WLR 1362 (recognising legal personality of an Indian temple under English law).


behaviour may not be under the control of their 'driver' or predictable by their manufacturer or owner.17 A few writers have gone further, arguing that procedures need to be put in place to try robot criminals, with provision for 'punishment' through reprogramming or, in extreme cases, destruction.18

These arguments suggest an instrumental approach to personality, but scholarly explanations of the most common form of juridical person—the corporation—show disparate justifications for its status as a separate legal person that may help answer the question of whether that status should be extended to AI systems also.

The aggregate theory, sometimes referred to as the contractarian or symbolist theory, holds that a corporation is a device created by law to allow natural persons who organise themselves as a group to reflect that organisation in their legal relations with other parties. Group members could establish individual contractual relations with those other parties limiting liability and so on, but the corporate form enables them to do so collectively at a lower cost.19 The theory has been criticised and is, in any case, the least applicable to AI systems.20

The fiction and concession theories of corporate personality have separate origins21 but amount to the same thing: corporations have personality because a legal system chooses to give it to them. As the US Supreme Court observed in 1819, a corporation 'is an artificial being, invisible, intangible, and existing only in contemplation of law'.22 Personality is granted to achieve policy ends, such as encouraging entrepreneurship, or to contribute to the coherence and stability of the legal system, such as through the perpetuity of certain entities. The purposive aspect used to be more evident when personality was explicitly granted through a charter or legislation; in the course of the twentieth century this became a mere formality.23 These positivist accounts most closely align with legislative and judicial practice in recognising personality, and could encompass its extension to AI systems.

The realist theory, by contrast, holds that corporations are neither fictions nor mere symbols but objectively real entities that pre-exist conferral of personality

17 See eg Changing Driving Laws to Support Automated Vehicles (Policy Paper) (National Transport Commission, May 2018) para 1.5; Automated Vehicles: A Joint Preliminary Consultation Paper (Law Commission, Consultation Paper No 240; Scottish Law Commission, Discussion Paper No 166, 2018) para 4.107. cf Chesterman, 'Autonomy' (n 5) 225.
18 See eg G Hallevy, Liability for Crimes Involving Artificial Intelligence Systems (Springer 2015); Y Hu, 'Robot Criminals' (2019) 52 UMichJLReform 487.
19 R Coase, 'The Nature of the Firm' (1937) 4 Economica 386. cf V Morawetz, A Treatise on the Law of Private Corporations (Little, Brown 1886) 2 ('the fact remains self-evident that a corporation is not in reality a person or a thing distinct from its constituent parts. The word corporation is but a collective name for the corporators.').
20 N Banteka, 'Artificially Intelligent Persons' (2020) 58 HousLR (forthcoming).
21 Dewey (n 10) 665–9.
22 Trustees of Dartmouth Coll. v Woodward 17 US 518, 636 (1819).
23 See eg CE Amsler, RL Bartlett and CJ Bolton, 'Thoughts of Some British Economists on Early Limited Liability and Corporate Legislation' (1981) 13 History of Political Economy 774; G Dari-Mattiacci et al, 'The Emergence of the Corporate Form' (2017) 33 JLEcon&Org 193.


by a legal system. Though they may have members, they act independently and their actions may not be attributable to those members. At its most extreme, it is argued that corporations are not only legal but also moral persons.24 This theory tends to be favoured more by theorists and sociologists than legislators and judges, but echoes the tension highlighted in the introduction: that legal personality is not merely bestowed but deserved. In practice, however, actual recognition as a person before the law remains in the gift of the State.25

The end result is, perhaps, that Dewey was correct a century ago: '"person" signifies what law makes it signify'.26 Though the question of personality is a binary, however—recognised or not—the content of that status is a spectrum. Setting aside for the moment the idea that an AI system might deserve recognition as a person, a State's decision to grant it should be guided by the rights and duties that would be recognised also.

    B. The Content of Legal Personality

Legal personality brings with it rights and obligations, but these need not be the same for all persons within a legal system. Even among natural persons, the struggle for equal rights of women, ethnic or religious minorities, and other disadvantaged groups reflects this truth.27

It is possible, for example, to grant only rights without obligations. This has tended to be the approach in giving personhood to nature—both in theory, when it was first advocated in 1972,28 and in practice, such as in the Constitution of Ecuador.29 One could argue that such 'personality' is merely an artifice to avoid problems of standing: enabling human individuals to act on behalf of a non-human rights holder, rather than requiring them to establish standing in their own capacity.30 In any case, it seems inapposite to the reasons for considering personality of AI systems.

On the other hand, AI legal personality could come only with obligations. That might seem superficially attractive, but insofar as those obligations are intended to address accountability gaps there would be some obvious problems. Civil liability typically leads to an award of damages, for example, which can only be paid if the wrongdoer is capable of owning property.31 One

24 P French, 'The Corporation as a Moral Person' (1979) 16(3) AmPhilQ 207.
25 K Iwai, 'Persons, Things and Corporations: The Corporate Personality Controversy and Comparative Corporate Governance' (1999) 47 AJCL 583; SM Watson, 'The Corporate Legal Person' (2019) 19 JCLS 137.
26 Dewey (n 10) 655.
27 JJ Bryson, ME Diamantis and TD Grant, 'Of, for, and by the People: The Legal Lacuna of Synthetic Persons' (2017) 25 Artificial Intelligence and Law 280.
28 Stone (n 4).

29 Constitution of the Republic of Ecuador, arts 71–74.
30 C Rodgers, 'A New Approach to Protecting Ecosystems' (2017) 19 EnvLRev 266. In New Zealand, by contrast, trustees were established to act on behalf of the environmental features given personality.
31 cf Liability for Artificial Intelligence and Other Emerging Digital Technologies (EU Expert Group on Liability and New Technologies 2019) 38.


could imagine scenarios in which those payments were made from a central fund, though this would be more akin to compulsory insurance regimes proposed as an alternative means of addressing the liability question.32 'Personality' would be a mere formality.

In the case of corporations, personality typically means the capacity to sue and be sued, to enter into contracts, to incur debt, to own property, and to be convicted of crimes. On the rights side, the extent to which corporations enjoy constitutional protections comparable to natural persons is the subject of ongoing debate. Though the United States has arguably granted the most protections to corporate entities, even there a line has been drawn at guarantees such as the right against self-incrimination.33 Typically, juridical persons will have fewer rights than natural ones. (A similar situation obtains in international law, where States enjoy plenary personality and international organisations may have varying degrees of it.34)

    1. Private law

The ability to be sued is one of the primary attractions of personality for AI systems, as the European Parliament acknowledged.35 This presumes, of course, that there are meaningful accountability gaps that can and should be filled. These gaps are often overstated.36 A different reason for wariness about such a remedy is that, even if it did serve a gap-filling function, granting personality to AI systems would also shift responsibility under current laws away from existing legal persons. Indeed, it would create an incentive to transfer risk to such electronic persons in order to shield natural and traditional juridical ones from exposure.37 That is a problem with corporations also, which may be used to protect investors from liability beyond the fixed sum of their investment—indeed, that is often the point of using a corporate vehicle in the first place. The reallocation of risk is justified on the basis that it encourages investment and entrepreneurship.38 Safeguards typically include a requirement that a limited liability entity include that status in its name ('Ltd', 'LLC', etc) and the possibility of piercing the

32 See eg DA Crane, KD Logue and BC Pilz, 'A Survey of Legal Issues Arising from the Deployment of Autonomous and Connected Vehicles' (2017) 23 Michigan Telecommunications and Technology Law Review 256–9. cf KS Abraham and RL Rabin, 'Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era' (2019) 105 VaLRev 127.
33 SA Trainor, 'A Comparative Analysis of a Corporation's Right Against Self-Incrimination' (1994) 18 FordhamIntlLJ 2139. But see Citizens United v Federal Election Commission, 558 US 310 (US Supreme Court, 2010).
34 S Chesterman, 'Does ASEAN Exist? The Association of Southeast Asian Nations as an International Legal Person' (2008) XII SYBIL 199.
35 See above (n 9).
36 See Chesterman, 'Autonomy' (n 5); Chesterman, 'Opacity' (n 5).
37 Bryson, Diamantis and Grant (n 27) 287.
38 FH Easterbrook and DR Fischel, 'Limited Liability and the Corporation' (1985) 52 UChiLRev 89.


corporate veil in limited circumstances to prevent abuse of the form.39 In the case of AI systems, similar veil-piercing mechanisms could be developed—though if a human were manipulating AI in order to protect him- or herself from liability, the ability to do so might suggest that the AI system in question was not deserving of its separate personhood.40

Entry into contracts is occasionally posited as a reason to grant AI systems personality.41 Yet the use of electronic agents to conclude binding agreements is hardly new. High-frequency trading, for example, relies on algorithms concluding agreements with other algorithms on behalf of traditional persons.42 Though the autonomy of AI systems may challenge application of existing doctrine to such practices—notably, when something goes wrong, such as a mistake—this is still resolvable without recourse to new legal persons.43

Taking on debt and owning property would be necessary incidents of the ability to be sued and enter into contracts.44 The possibility that AI systems could accumulate wealth raises the question of whether or how they might be taxed. Taxation of robots has been proposed as a means of addressing the diminished tax base and displacement of workers anticipated as a result of automation.45 Bill Gates, among others, has suggested that such robots—or the companies that own them—should be taxed.46 Industry representatives have argued that this would have a negative impact on competitiveness and thus far it has not been adopted.47 An alternative is to look not at the machines but at the position of companies abusing market position, with possibilities including more aggressive taxing of profits or requirements for

39 D Millon, 'Piercing the Corporate Veil, Financial Responsibility, and the Limits of Limited Liability' (2007) 56 EmoryLJ 1305.
40 J Turner, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan 2019) 193.
41 See eg S Chopra and LF White, A Legal Theory for Autonomous Artificial Agents (University of Michigan Press 2011) 160.
42 See eg T Čuk and A van Waeyenberge, 'European Legal Framework for Algorithmic and High Frequency Trading (Mifid 2 and MAR): A Global Approach to Managing the Risks of the Modern Trading Paradigm' (2018) 9 EJRR 146.

43 See eg Quoine Pte Ltd v B2C2 Ltd [2020] SGCA(I) 2 (Singapore Court of Appeal) paras 97–128; Chesterman, 'Autonomy' (n 5) 243–4.
44 Some argue that this is the most important function of separate legal personality for corporations: H Hansmann and R Kraakman, 'The Essential Role of Organizational Law' (2000) 110 Yale LJ 387. cf H Tjio, 'Lifting the Veil on Piercing the Veil' [2014] Lloyd's Maritime and Commercial Law Quarterly 19. It is conceivable that AI systems lacking the ability to own property could still be subject to certain forms of legal process, such as injunctions, and could offset debts through their 'labour'.
45 BA King, T Hammond and J Harrington, 'Disruptive Technology: Economic Consequences of Artificial Intelligence and the Robotics Revolution' (2017) 12(2) Journal of Strategic Innovation and Sustainability 53.
46 KJ Delaney, 'The Robot that Takes Your Job Should Pay Taxes, says Bill Gates' Quartz (18 February 2017).
47 G Prodhan, 'European Parliament Calls for Robot Law, Rejects Robot Tax' Reuters (16 February 2017); L Summers, 'Robots Are Wealth Creators and Taxing Them Is Illogical' Financial Times (6 March 2017).


distributed share ownership.48 In any case, taxation of AI systems—like the ability to take on debt and own property—would follow rather than justify granting them personality.49 (The question of AI systems owning their creations will be considered in the next section.50)

In addition to owning property, AI systems might also be called on to manage it. In 2014, for example, it was announced that a Hong Kong venture capital firm had appointed a computer program called Vital to its board of directors.51 As with the Saudi Arabian government's awarding of citizenship, this was more style than substance—as a matter of Hong Kong law the program was not appointed to anything; in an interview some years later, the managing partner conceded that the company merely treated Vital as a member of the board with observer status.52 It is possible that human directors might delegate some responsibility to an AI system, but under most corporate law regimes they cannot absolve themselves of the ultimate responsibility for managing the organisation.53 Most jurisdictions require that those directors be natural persons, though in some it is possible for a juridical person—typically another corporation—to serve on the board.54

Shawn Bayern has gone further, arguing that loopholes in US business entity law could be used to create limited liability companies with no human members at all.55 This requires a somewhat tortured interpretation of that law—a natural person creates a company, adds an AI system as a member, then resigns56—but suggests the manner in which legal personality might be adapted in the future.

    2. Criminal law

A final quality of legal personality is the most visceral and worthy of some elaboration: the ability to be punished. If given legal personality comparable to a corporation, there seems little reason to argue over whether an AI system could be prosecuted under the criminal law. Provided actus reus and mens rea

48 'Why Taxing Robots Is Not a Good Idea' Economist (25 February 2017).
49 cf L Floridi, 'Robots, Jobs, Taxes, and Responsibilities' (2017) 30 Philosophy & Technology 1.
50 See below section II.B.
51 R Wile, 'A Venture Capital Firm Just Named an Algorithm to Its Board of Directors' Business Insider (13 May 2014).
52 N Burridge, 'AI Takes Its Place in the Boardroom' Nikkei Asian Review (25 May 2017).
53 F Möslein, 'Robots in the Boardroom: Artificial Intelligence and Corporate Law' in W Barfield and U Pagallo (eds), Research Handbook on the Law of Artificial Intelligence (Edward Elgar 2018) 658–60.
54 See eg Personen- und Gesellschaftsrecht (PGR) 1926 (Liechtenstein) art 344; Companies Ordinance 2014 (HK), section 457. This was also possible under English law until 2015. See now Small Business, Enterprise and Employment Act 2015 (UK), section 87.
55 S Bayern, 'Of Bitcoins, Independently Wealthy Software, and the Zero-Member LLC' (2014) 108 NWULRev 1495–500.
56 S Bayern, 'The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems' (2015) 19 Stanford Technology Law Review 101.


are established,57 such an entity could be fined or have its property seized; a licence to operate could be suspended or revoked. In some jurisdictions, a winding up order can be made against a juridical person; where that is not available, a fine sufficiently large to bankrupt the entity may have the same effect. In an extreme case, one could imagine a 'robot criminal' being destroyed. But would this be desirable and would it be effective?

The most commonly articulated reasons for criminal punishment are retribution, incapacitation, deterrence, and rehabilitation.58 Retribution is the oldest reason for punishment, sublimating the victim's desire for revenge into a societal demonstration that wrongs have consequences.59 Calibration of those consequences was at its most literal in the lex talionis: an eye for an eye, a tooth for a tooth. The demonstrative effect of fining a corporation—or an electronic 'person'—may be preferable to a crime otherwise going unpunished.60

The penal system can also be used to incapacitate those convicted of crimes, physically preventing them from reoffending. Typically this is through varying forms of incarceration, but may also include exile, amputation of limbs, castration, and execution. In the case of corporations, it may include withdrawal of a licence to operate or a compulsory winding up order.61 Here direct analogies with the treatment of dangerous animals and machinery can be made, although measures such as putting down a vicious dog or decommissioning a faulty vehicle are administrative rather than penal and do not depend on determinations of 'guilt'.62 In some jurisdictions, children and the mentally ill may be deemed incapable of committing crimes, yet they may still be detained by the State if judged to be a danger to themselves or the community.63 Such individuals do not lose their personality; in the case of AI systems, it is not necessary to give them personality in order to impose measures akin to confinement if a product can be recalled or a licence revoked.

57 For corporations this has sometimes proved a difficult, but not insurmountable, challenge. See VS Khanna, 'Corporate Criminal Liability: What Purpose Does It Serve?' (1996) 109 HarvLRev 1513; JW Yockey, 'Beyond Yates: From Engagement to Accountability in Corporate Crime' (2016) 12 New York University Journal of Law and Business 412–13; SW Buell, 'Criminally Bad Management' in J Arlen (ed), Research Handbook on Corporate Crime and Financial Misdealing (Edward Elgar 2018) 59. The fact that corporations are capable of violating criminal laws despite lacking free will or moral responsibility presumably dispenses with this as an argument against criminal responsibility of AI systems.
58 It is arguable that the symbolic role of criminal law need not require actual punishment—it is not uncommon to have laws that are unenforced in practice. Yet this typically relies on an explicit or implicit decision not to investigate or prosecute specific crimes, rather than acceptance that a class of actors cannot be punished at all.
59 Denunciation is sometimes presented as a standalone justification for punishment in its own right. See B Wringe, An Expressive Theory of Punishment (Palgrave Macmillan 2016).
60 C Mulligan, 'Revenge Against Robots' (2018) 69 SCLRev 579; Hu (n 18) 503–7.
61 WR Thomas, 'Incapacitating Criminal Corporations' (2019) 72 VandLRev 905.
62 See eg D Legge and S Brooman, Law Relating to Animals (Cavendish Publishing 2000).
63 A Loughnan, Manifest Madness: Mental Incapacity in the Criminal Law (Oxford University Press 2012).


Deterrence is a more recent justification for punishment, premised on the rationality of offenders. By structuring penalties, it imposes costs on behaviour that are intended to outweigh any potential benefits. The ability to reduce criminality to economic analysis may seem particularly applicable to both corporations and AI systems. Yet in the case of the former the incentives are really aimed at human managers who might otherwise act in concert through the corporation for personal as well as corporate gain.64 In the case of an AI system, the deterrent effect of a fine would shape behaviour only if its programming sought to maximise economic gain without regard for the underlying criminal law itself.

A final rationale for punishment is rehabilitation. Like incapacitation and deterrence, it is forward-looking and aims to reduce recidivism. Unlike incapacitation, however, it seeks to influence the decision to offend rather than the ability to do so;65 unlike deterrence, that influence is intended to operate intrinsically rather than extrinsically.66 Rehabilitation in respect of natural persons often appears to be embraced more in theory than in practice; in the United States in particular, it fell from favour in the 1970s.67 With respect to corporations, however, the clearer levers of influence have encouraged experimentation with narrowly tailored penalties that aim to encourage good behaviour as well as discourage bad.68 Such an approach might seem well suited to AI systems, with violations of the criminal law being errors to be debugged rather than sins to be punished.69 Indeed, the educative aspect of rehabilitation has been directly analogised to machine learning in a book-length treatise on the topic.70 Yet neither legal personality nor the coercive powers of the State should be necessary to ensure that machine learning leads to outputs that do not violate the criminal law.
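The deterrence point can be made concrete with a stylised sketch. The figures and code below are hypothetical illustrations, not drawn from the article or any deployed system: a fine alters an optimising agent's choice only if the agent both maximises economic gain and registers the sanction as a cost in that calculation.

    # Hypothetical numbers: offending yields a gain of 100; if detected
    # (probability 0.2), a fine of 1,000 is imposed.
    GAIN, FINE, P_DETECTION = 100.0, 1000.0, 0.2

    def expected_utility(fine_in_objective: bool) -> float:
        """Expected value of offending, as computed by the agent itself."""
        if fine_in_objective:
            return GAIN - P_DETECTION * FINE  # sanction enters the objective
        return GAIN  # sanction is invisible to the optimiser

    for fine_in_objective in (True, False):
        utility = expected_utility(fine_in_objective)
        choice = 'refrain' if utility < 0 else 'offend'
        print(f'fine modelled: {fine_in_objective} -> '
              f'expected utility {utility:+.0f} -> {choice}')

On these numbers the agent that models the fine refrains (expected utility −100), while an agent whose objective ignores sanctions offends however large the fine; deterrence operates, if at all, through the objective function.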

    C. No Soul to Be Damned

While arguments justifying liability of corporations tend to be instrumental, it is striking how the emerging literature on 'robot criminals' slides into anthropomorphism. The very term suggests a special desire to hold humanoid AI systems to a higher standard than, say, household appliances with varying

64 A Hamdani and A Klement, 'Corporate Crime and Deterrence' (2008) 61 StanLRev 271. cf RA Guttman, 'Effective Compliance Means Imposing Individual Liability' (2018) 5 Emory Corporate Governance and Accountability Review 77.
65 J Bentham, 'Panopticon Versus New South Wales' in J Bowring (ed), The Works of Jeremy Bentham (William Tait 1843) vol 4, 174.
66 See T Ward and S Maruna, Rehabilitation (Routledge 2007).
67 See eg AW Alschuler, 'The Changing Purposes of Criminal Punishment: A Retrospective on the Past Century and Some Thoughts About the Next' (2003) 70 UChiLRev 9. cf FT Cullen and KE Gilbert, Reaffirming Rehabilitation (2nd edn, Anderson 2013).
68 ME Diamantis, 'Clockwork Corporations: A Character Theory of Corporate Punishment' (2018) 103 IowaLRev 507.
69 MA Lemley and B Casey, 'Remedies for Robots' (2019) 86 UChiLRev 1370.
70 Hallevy (n 18) 210–11.


degrees of autonomy or unembodied AI systems operating in the cloud.71 There is no principled reason for such a distinction, but it speaks to the tension within arguments for AI personality that blend instrumental and inherent justifications.

Interestingly, arguments over the juridical personality of corporations tend to focus on the opposite problem: their dissimilarity to humans, pithily described by the First Baron Thurlow as them having 'no soul to be damned, and no body to be kicked'.72 The lack of a soul has not impeded juridical personality of corporations and poses no principled barrier to treating AI systems similarly. Corporate personality is different from AI personality, however, in that a corporation is made up of human beings, through whom it operates, whereas an AI system is made by humans.73

Instrumental reasons could, therefore, justify according legal personality to AI systems. But they do not require it. The implicit anthropomorphism elides further challenges such as defining the threshold of personality when AI systems exist on a spectrum, as well as how personality might apply to distributed systems. It would be possible, then, to create legal persons comparable to corporations—each autonomous vehicle, smart medical device, resume-screening algorithm, and so on could be incorporated.74 If there are true liability gaps then it is possible that such legal forms could fill them. Yet the more likely beneficiaries of such an arrangement would be producers and users, who would thus be insulated from some or all liability.

    II. NATURAL PERSONALITY: COGITO, ERGO SUM?

Instrumentalism is not the only reason legal systems recognise personality, however. In the case of natural persons, no Turing Test needs to be passed: the mere fact of being born entitles one to personhood before the law.75

It was not always thus. Through much of human history, slaves were bought and sold like property;76 indigenous peoples were compared to animals roaming the land, justifying their dispossession;77 and for centuries under English law, Blackstone's summary of the position of women held that 'husband and wife are

71 cf JM Balkin, 'The Three Laws of Robotics in the Age of Big Data' (2017) 78 OhioStLJ 1219.
72 MA King, Public Policy and the Corporation (Chapman and Hall 1977) 1. See eg JC Coffee, Jr, '"No Soul to Damn: No Body to Kick": An Unscandalized Inquiry into the Problem of Corporate Punishment' (1981) 79 MichLRev 386.
73 SM Solaiman, 'Legal Personality of Robots, Corporations, Idols and Chimpanzees: A Quest for Legitimacy' (2017) 25 Artificial Intelligence and Law 174.
74 See eg Chopra and White (n 41) 161.
75 See eg Universal Declaration of Human Rights, GA Res 217A(III) (1948), UN Doc A/810 (1948) art 1; International Covenant on Civil and Political Rights (ICCPR) (16 December 1966) 999 UNTS 171, in force 23 March 1976, art 6(1).
76 See J Allain (ed), The Legal Understanding of Slavery: From the Historical to the Contemporary (Oxford University Press 2013).
77 S Chesterman, '"Skeletal Legal Principles": The Concept of Law in Australian Land Rights Jurisprudence' (1998) 40 Journal of Legal Pluralism and Unofficial Law 61.


one person, and the husband is that person'.78 Even today, natural persons enjoy plenary rights and obligations only if they are adults, of sound mind, and not incarcerated.

As indicated earlier, many of the arguments in favour of AI personality implicitly or explicitly assume that AI systems are approaching human qualities in a manner that would entitle them to comparable recognition before the law. Such arguments have been challenged both for their analysis and their implications. In terms of analysis, Neil Richards and William Smart have termed the tendency to anthropomorphise AI systems the 'android fallacy'.79 Experiment after experiment has shown that people are more likely to ascribe human qualities such as moral sensibility to machines on the basis of their humanoid appearance, natural language communication, or the mere fact of having been given a name.80 More serious arguments about AI approximating human qualities are challenged due to unexamined assumptions about how those qualities manifest in humans ourselves.81

In terms of the implications, the 2017 European Parliament resolution prompted hundreds of AI experts from across the continent to warn in an open letter that legal personality for AI would be inappropriate from 'an ethical and a legal perspective'. Interestingly, such warnings may themselves fall foul of the android fallacy by assuming that legal status based on the natural person model necessarily brings with it all the 'human' rights guaranteed under EU law.82 Other writers candidly admit that the only basis for denying AI systems personality may ultimately be a form of speciesism—privileging human welfare over robot welfare because we the lawmakers are human.83 If AI systems become so sophisticated that this is our strongest defence, the problem may not be their legal status but our own.84

This section nonetheless takes seriously the idea that certain AI systems might have an entitlement to personality due to their inherent qualities. The technical aspects of how those qualities might manifest—and indeed a detailed examination of the human qualities that they mimic—are beyond the

78 L Holcombe, Wives and Property: Reform of the Married Women's Property Law in Nineteenth-Century England (Martin Robertson 1983) 18.
79 NM Richards and WD Smart, 'How Should the Law Think About Robots?' in R Calo, AM Froomkin and I Kerr (eds), Robot Law (Edward Elgar 2016) 18–21.
80 cf L Damiano and P Dumouchel, 'Anthropomorphism in Human–Robot Co-evolution' (2018) 9 Frontiers in Psychology 468.
81 See eg E Hildt, 'Artificial Intelligence: Does Consciousness Matter?' (2019) 10(1535) Frontiers in Psychology 1–3; G Meissner, 'Artificial Intelligence: Consciousness and Conscience' (2020) 35 AI & Society 231. For John Searle's famous 'Chinese room' argument, see JR Searle, 'Minds, Brains, and Programs' (1980) 3 Behavioral and Brain Sciences 417–24.
82 Open Letter to the European Commission: Artificial Intelligence and Robotics (April 2018) para 2(b). cf Turner (n 40) 189–90.
83 See eg Bryson, Diamantis and Grant (n 27) 283. cf P Singer, 'Speciesism and Moral Status' (2009) 40 Metaphilosophy 567.
84 See below section III.


scope of this article.85 Instead, the focus will be on how and why natural personhood might be extended. A first question to examine is how this has been handled in the past, such as the enfranchisement and empowerment of natural persons long treated as inferior to white men. More recently, activists and scholars have urged further expansion of certain rights to non-human animals such as chimpanzees based on their own inherent qualities. The inquiry then turns to the strongest articulation today of meaningful rights on behalf of AI systems for inherent rather than instrumental reasons: that they should be able to own their creations.

    A. The Extension of Natural Personality

The arc of the moral universe is long, as Dr Martin Luther King Jr famously intoned, but it bends towards justice. At the time that the United States drafted its Declaration of Independence in 1776, the notion that 'all men' [sic] were 'created equal' was demonstrably untrue. A decade later, the French Declaration on the Rights of Man similarly proclaimed natural and imprescriptible rights for all—'nonsense upon stilts' was Jeremy Bentham's observation.86 Man may well be born free, as Rousseau had opined in the opening lines of The Social Contract, but everywhere he remained in chains.87

And yet the succeeding centuries did see a progressive realisation of those lofty aspirations and the spread of rights. By the middle of the twentieth century, the Universal Declaration of Human Rights could claim that all human beings were 'born free and equal in dignity and rights', despite one-third of them living in territories that the UN itself classified as non-self-governing. Decolonisation, the end of apartheid, women's liberation and other movements followed; rights remain a site of contestation, but virtually no State today would seriously contend that human adults are not persons before the law.88

Interestingly, some arguments in favour of legal personality for AI draw not on this progressivist narrative of natural personhood but on the darker history of slavery. Andrew Katz and Ugo Pagallo, for example, find analogies with the ancient Roman law mechanism of peculium, whereby a slave lacked legal personality and yet could operate as more than a mere agent for his master.89

85 See eg J-M Fellous and MA Arbib, Who Needs Emotions? The Brain Meets the Robot (Oxford University Press 2005); W Wallach and C Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford University Press 2009).
86 J Bentham, 'Anarchical Fallacies' in J Bowring (ed), The Works of Jeremy Bentham (William Tait 1843) vol 2, 501.
87 J-J Rousseau, The Social Contract (GDH Cole trans, first published 1762, JM Dent 1923) 49.
88 For limited exceptions concerning apostates and persons with disabilities, see PM Taylor, A Commentary on the International Covenant on Civil and Political Rights (Cambridge University Press 2020) 449–54. For a discussion of anencephalic infants, see Kurki (n 15) 9.
89 A Katz, 'Intelligent Agents and Internet Commerce in Ancient Rome' (2008) 20 Society for Computers and Law 35; U Pagallo, The Laws of Robots: Crimes, Contracts, and Torts (Springer


As an example of a creative interpretation of personhood it is interesting, though it relies on instrumental justifications rather than the inherent qualities of slaves. As Pagallo notes, peculium was in effect a sort of 'proto-limited liability company'.90 From the previous section, there is no bar on legal systems creating such structures today—as for whether they should do so, reliance upon long discarded laws associated with slavery may not be the strongest case possible.91

An alternative approach is to consider how the legal system treats animals.92 For the most part, they are regarded as property that can be bought and sold, but also as deserving of 'humane' treatment.93 Liability of owners for damage caused by animals has limited application to AI systems;94 here, the question is whether those animals might 'own' themselves.

Various efforts have sought to attribute degrees of personality to nonhuman animals, with little success. In 2013, for example, the Nonhuman Rights Project filed lawsuits on behalf of four captive chimpanzees, arguing that the animals exhibited advanced cognitive abilities, autonomy, and self-awareness. In denying writs of habeas corpus, the New York State Court Appellate Division did not dispute these qualities but held that extension of rights such as personality had traditionally been linked to the imposition of obligations in the form of a social contract. Since, 'needless to say', the chimpanzees could not bear any legal duties, they could not enjoy rights of personality such as the right to liberty.95 This was a curious basis on which to dismiss the case, as many humans who lack the capacity to exercise rights or responsibilities—infants, persons in a coma—are nonetheless deemed persons before the law.96 A

2013) 103–6. See also H Ashrafian, 'Artificial Intelligence and Robot Responsibilities: Innovating Beyond Rights' (2015) 21 Science and Engineering Ethics 325; S Nasarre-Aznar, 'Ownership at Stake (Once Again): Housing, Digital Contents, Animals, and Robots' (2018) 10 Journal of Property, Planning, and Environmental Law 78; E Fosch-Villaronga, Robots, Healthcare, and the Law: Regulating Automation in Personal Care (Routledge 2019) 152. In 2017, a digital bank called Peculium was established in France—presumably for investors who never studied Latin.
90 Pagallo (n 89) 104.
91 M Chinen, Law and Autonomous Machines: The Co-Evolution of Legal Responsibility and Technology (Edward Elgar 2019) 19.
92 See eg VAJ Kurki and T Pietrzykowski (eds), Legal Personhood: Animals, Artificial Intelligence and the Unborn (Springer 2017); S Stucki, 'Towards a Theory of Legal Animal Rights: Simple and Fundamental Rights' (2020) OJLS (forthcoming).

93 cf K Sykes, 'Human Drama, Animal Trials: What the Medieval Animal Trials Can Teach Us About Justice for Animals' (2011) 17 Animal Law 273.
94 In the case of an animal known to belong to a 'dangerous species', its keeper is presumed to know that it has a tendency to cause harm and will be held liable for damage that it causes without the need to prove fault on the keeper's part. For other animals, it must be shown that the keeper knew the specific animal was dangerous. These common law rules on animals are now covered by legislation: Animals Act 1971 (UK). The English courts have shied away from a general doctrine of strict liability for 'ultra-hazardous activities'. See C Witting, Street on Torts (15th edn, Oxford University Press 2018) 453.
95 People ex rel Nonhuman Rights Project, Inc v Lavery, 998 NYS 2d 248 (App Div, 2014).
96 RS Abate, Climate Change and the Voiceless: Protecting Future Generations, Wildlife, and Natural Resources (Cambridge University Press 2019) 10–12; Kurki (n 15) 6–10 (distinguishing between passive and active legal personhood).


A parallel case rejected that argument on the circular basis that it ‘ignores the fact that these are still human beings, members of the human community’.97

Leave to appeal to the State Court of Appeals was denied, but one of the judges issued a concurring opinion that ended on a speculative note about the future of such litigation. The issue, Judge Fahey observed, is profound and far-reaching. ‘Ultimately, we will not be able to ignore it. While it may be arguable that a chimpanzee is not a “person”, there is no doubt that it is not merely a thing.’98

Gabriel Hallevy has argued that animals are closer than AI systems to humans when one considers emotionality as opposed to rationality, but that this has not generally led to them being given personhood under the law. Instead, it is AI systems’ rationality that provides the basis for personhood.99 That may be true with regard to the ability to make out the mental element of a criminal offence, but the fact that it is a crime to torture a chimpanzee but not a computer also points to an important difference in how the legal system values the two types of entity. In fact, a stronger argument may be made to protect embodied AI systems that evoke emotional responses on the part of humans—regardless of the sophistication of their internal processing. It seems probable that laws to protect such ‘social robots’ will at some point be adopted, comparable to animal abuse laws. As in the case of those laws, protection will likely be guided by social mores rather than consistent biological—or technological—standards.100

The assumption that natural legal personality is limited to human beings is so ingrained in most legal systems that it is not even articulated.101 The failure to extend comparable rights even to our nearest evolutionary cousins bodes ill for advocates of AI personality based on presumed inherent qualities.102

    B. Rewarding Creativity

A distinct reason for considering whether AI systems should be recognised as persons focuses not on what they are but what they can do. For the most part, this is framed as the question of whether an individual or corporation can claim ownership of work done by an AI system. Implicit or explicit in such discussions, however, is the understanding that if such work had been done by a human then he or she would own it him- or herself.

97 Nonhuman Rights Project, Inc ex rel Tommy v Lavery, 54 NYS 3d 392, 396 (App Div, 2017).

98 Nonhuman Rights Project, Inc ex rel Tommy v Lavery, 100 NE 3d 846, 848 (NY, 2018).

99 Hallevy (n 18) 28.

100 K Darling, ‘Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects’ in R Calo, AM Froomkin and I Kerr (eds), Robot Law (Edward Elgar 2016) 226–9. cf R Calo, ‘Robotics and the Lessons of Cyberlaw’ (2015) 103 CLR 532.

101 Kurki (n 15) 8.

102 cf ibid 176–8 (discussing whether AI could be ‘ultimately valuable’ and thus entitled to personhood).


There is, in fact, a long history of questioning whether machine-assisted creation is protectable through copyright.103 Early photographs, for example, were not protected because the mere capturing of light through the lens of a camera obscura was not regarded as true authorship.104 It took an iconic picture of Oscar Wilde going all the way to the US Supreme Court before copyright was recognised in mechanically produced creations.105 The challenge today is distinct: not whether a photographer can ‘own’ the image passively captured by a machine, but who might own new works actively created by one. A computer program like a word processor does not own the text typed on it, any more than a pen owns the words that it writes. But AI systems now write news reports, compose songs, paint pictures—these activities generate value, but can and should they attract the protections of copyright law?

In most jurisdictions, the answer is no.

The US Copyright Office, for example, has stated that legislative protection of ‘original works of authorship’106 is limited to works ‘created by a human being’. It will not register works ‘produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.’107 The word ‘any’ is key and raises the question of what level of human involvement is required to assert authorship.108

Consider the world’s most famous selfie—of a black crested macaque. David Slater went to Indonesia to photograph the endangered monkeys, which were too nervous to let him take close-ups. So he set up a camera that enabled them to snap their own photos.109 After the images gained significant publicity, animal rights activists argued that the monkeys had a greater claim to authorship of the photographs than the owner of the camera. Slater did eventually win, reflecting existing law110—though as part of a settlement he agreed to donate 25 per cent of future royalties from the images to groups protecting crested macaques.111

103 J Grimmelmann, ‘There’s No Such Thing as a Computer-Authored Work—and It’s a Good Thing, Too’ (2016) 39 Columbia Journal of Law & the Arts 403.

104 M de Cock Buning, ‘Artificial Intelligence and the Creative Industry: New Challenges for the EU Paradigm for Art and Technology by Autonomous Creation’ in W Barfield and U Pagallo (eds), Research Handbook on the Law of Artificial Intelligence (Edward Elgar 2018) 524.

105 Burrow-Giles Lithographic Co v Sarony, 111 US 53 (1884). Arguments continued, however, with Germany withholding full copyright of photographs until 1965. A Nordemann, ‘Germany’ in Y Gendreau, A Nordemann and R Oesch (eds), Copyright and Photographs: An International Survey (Kluwer 1999) 135.

106 17 USC §102(a).

107 Compendium of US Copyright Office Practices (3rd edn, US Copyright Office 2019) section 313.2 (emphasis added).

108 See DJ Gervais, ‘The Machine as Author’ (2020) 105 IowaLRev (forthcoming) (proposing a test of ‘originality causation’).

109 C Cheesman, ‘Photographer Goes Ape over Monkey Selfie: Who Owns the Copyright?’ Amateur Photographer (7 August 2014).

110 Naruto v Slater, 888 F 3d 418 (9th Cir, 2018). The Court held that Naruto lacked standing to sue under the US Copyright Act and had no claim to the photographs Slater had published.

111 M Haag, ‘Who Owns a Monkey Selfie? Settlement Should Leave Him Smiling’ New York Times (11 September 2017).


As computers generate more content independently of their human programmers, it is going to be harder and harder for humans to take credit. Instead of training a monkey how to press a button, it may be more like a teacher trying to take credit for the work of his or her student.

Turning to the normative question of whether AI systems themselves should have a claim to ownership, the policy behind copyright is often articulated as incentivising innovation. This has long been seen as unnecessary or inappropriate for computers. ‘All it takes,’ Pamela Samuelson wrote in 1986, ‘is electricity (or some other motive force) to get the machines into production’.112 Here the Turing Test offers a different kind of thought experiment: the more machines are designed to copy human traits, the more important such incentives might become.

Until recently, China followed the orthodoxy that AI-produced work is not entitled to copyright protection.113 In December 2019, however, a district court in China held that an article produced by an algorithm could not be copied without permission. The article in question was a financial report published by Tencent with a note that it was ‘automatically written’ by Dreamwriter, a news writing program developed by the company in 2015. Shanghai Yingxun Technology Company copied the article without permission and Tencent sued. The article had been taken down, but the infringing company was ordered to pay ¥1,500 (US$216) for ‘economic losses and rights protection’.114

The Chinese case reflects a distinct reason for recognising copyright, which is the protection of upfront investment in creative processes. This account presumes that, in the absence of such protection, investment will dry up and there will be a reduced supply of creative works.115 Such an approach to copyright is broadly consistent with common law doctrines concerning work created in the course of employment, known in the United States as work for hire, under which a corporate employer or an individual who commissions a work owns copyright despite the actual ‘author’ being someone else.116 This may not be available in civil law jurisdictions that place a greater emphasis on the moral rights of a human author.117

112 P Samuelson, ‘Allocating Ownership Rights in Computer-Generated Works’ (1986) 47 UPittLRev 1199.

113 Beijing Feilin Law Firm v Baidu Corporation (No. 239) (25 April 2019) (Beijing Internet Court); Chen Ming, ‘Beijing Internet Court Denies Copyright to Works Created Solely by Artificial Intelligence’ (2019) 14 Journal of Intellectual Property Law & Practice 593.

114 深圳市腾讯计算机系统有限公司 [Shenzhen Tencent Computer System Co Ltd] v 上海盈讯科技有限公司 [Shanghai Yingxun Technology Co Ltd] (24 December 2019) (Shenzhen Nanshan District People’s Court); Zhang Yangfei, ‘Court Rules AI-Written Article Has Copyright’ China Daily (9 January 2020).

115 K Raustiala and CJ Sprigman, ‘The Second Digital Disruption: Streaming and the Dawn of Data-Driven Creativity’ (2019) 94 NYULRev 1603–4.

116 cf Asia Pacific Publishing Pte Ltd v Pioneers & Leaders (Publishers) Pte Ltd [2011] 4 SLR 381, 398–402 (Singapore Court of Appeal) (distinguishing between authorship and ownership).

117 D Lim, ‘AI & IP: Innovation & Creativity in an Age of Accelerated Change’ (2018) 52 AkronLRev 843–6.


In Britain, legislation adopted in 1988 does in fact provide copyright protection for ‘computer-generated’ work, the ‘author’ of which is deemed to be the person who undertook ‘the arrangements necessary for the creation of the work’.118 Similar legislation has been adopted in New Zealand,119 India,120 Hong Kong,121 and Ireland.122 Though disputes about who took the ‘arrangements necessary’ may arise, ownership by a recognised legal person or by no one at all remain the only possible outcomes.123

The European Parliament in April 2020 issued a draft report arguing that AI-generated works could be regarded as ‘equivalent’ to intellectual works and therefore protected by copyright. It opposed giving personality of any kind to the AI itself, however, proposing that ownership instead vest in ‘the person who prepares and publishes a work lawfully, provided that the technology designer has not expressly reserved the right to use the work in that way’.124 The ‘equivalence’ to intellectual work is interesting, justified here on the basis of a proposed shift in recognising works based on a ‘creative result’ rather than a creative process.125

For the time being, then, copyright cannot be owned by AI systems—and does not need to be in order to recognise the creativity of those systems. Nevertheless, reservations as to ownership being claimed by anyone else are evident in the limited rights given for ‘computer-generated’ works. The duration is generally for a shorter period, and the deemed ‘author’ is unable to assert moral rights—such as the right to be identified as the author of the work.126 A World Intellectual Property Organization (WIPO) issues paper recognised the dilemma, noting that excluding such works would favour ‘the dignity of human creativity over machine creativity’ at the expense of making the largest number of creative works available to consumers. A middle path, it observed, might be to offer ‘a reduced term of protection and other limitations’.127

118 Copyright, Designs and Patents Act 1988 (UK), section 9(3). ‘Computer-generated’ is defined in section 178 as meaning that the work was ‘generated by computer in circumstances such that there is no human author of the work’.

119 Copyright Act 1994 (NZ), section 5(2)(a).

120 Copyright Amendment Act 1994 (India), section 2.

121 Copyright Ordinance 1997 (HK), section 11(3).

122 Copyright and Related Rights Act 2000 (Ireland), section 21(f).

123 See eg Nova Productions v Mazooma Games [2007] EWCA Civ 219; A Brown et al, Contemporary Intellectual Property: Law and Policy (5th edn, Oxford University Press 2019) 100–1.

124 S Séjourné, Draft Report on Intellectual Property Rights for the Development of Artificial Intelligence Technologies (European Parliament, Committee on Legal Affairs, 2020/2015(INI), 24 April 2020) paras 9–10.

125 ibid, Explanatory Statement.

126 Copyright, Designs and Patents Act, section 12(7) (protection for such works is limited to 50 years, rather than 70 years after the death of the author), section 79 (exception to moral rights).

127 Revised Issues Paper on Intellectual Property Policy and Artificial Intelligence (World Intellectual Property Organisation, WIPO/IP/AI/2/GE/20/1 REV, 21 May 2020) para 23. See also M du Sautoy, The Creativity Code: Art and Innovation in the Age of AI (Harvard University Press 2019) 102; R Abbott, The Reasonable Robot: Artificial Intelligence and the Law (Cambridge University Press 2020) 71–91.


C. Protecting Inventors

Whereas in copyright law the debate is over who owns works produced by AI systems, in patent law the question is whether they can be owned at all. Patent law in most jurisdictions provides or assumes that an ‘inventor’ must be human. In July 2019, Stephen Thaler decided to test those assumptions, filing patents in Britain, the European Union, and the United States that listed an AI system, DABUS, as the ‘inventor’.128 The British Intellectual Property Office was willing to accept that DABUS created the inventions, but relevant legislation required that an inventor be a natural person and not a machine.129 The European Patent Office (EPO) followed a more circuitous route to the same end, rejecting the applications on the basis that designating a machine as the inventor did not meet the ‘formal requirements’. These included stating the ‘family name, given names and full address of the inventor’.130 A name, the EPO observed, does not only identify a person: it enables them to exercise their rights and forms part of their personality. ‘Things’, by contrast, ‘have no rights which a name would allow them to exercise.’131

The US application was also rejected, based in part on the fact that relevant statutes repeatedly referred to inventors using ‘pronouns specific to natural persons’ such as ‘himself’ and ‘herself’. The Patent and Trademark Office (USPTO) cited cases holding that conception—‘the touchstone of inventorship’—is a ‘mental act’ that takes place in ‘the mind of the inventor’. Those cases concluded that invention in this sense is limited to natural persons and not corporations. The USPTO concluded that an application listing an AI as an ‘inventor’ was therefore incomplete, but was careful to avoid making any determination concerning ‘who or what’ actually created the inventions in question.132

These decisions were consistent with case law and the practice of patent offices around the world, none of which—yet—allows for an AI system to be recognised as an inventor.

128 The patents were for a ‘food container’ and ‘devices and methods for attracting enhanced attention’. DABUS is an acronym for Device for the Autonomous Bootstrapping of Unified Sentience.

129 Patents Act 1977 (UK), sections 7, 13. See Whether the Requirements of Section 7 and 13 Concerning the Naming of Inventor and the Right to Apply for a Patent Have Been Satisfied in Respect of GB1816909.4 and GB1818161.0 (BL O/741/19) (4 December 2019) (UK Intellectual Property Office) paras 14–20. The tribunal went on to observe that Thaler could not have acquired ownership from DABUS ‘as the inventor cannot itself hold property’ (para 23).

130 European Patent Convention, done at Munich, 5 October 1973, in force 7 October 1977, art 81, rule 19(1).

131 Grounds for the EPO Decision on Application EP 18 275 163 (27 January 2020) (European Patent Office) para 22.

132 In re Application No.: 16/524,350 (Decision on Petition) (22 April 2020) (US Patent and Trademark Office).


Analogous to copyright law, one purpose of the patent system is to encourage innovation by granting a time-limited monopoly in exchange for public disclosure. As even the creators of DABUS acknowledged, an AI system is unlikely to be motivated to innovate by the prospect of patent protection. Any such motivation would be found in its programming: it must be instructed to innovate.133

As for whether a human ‘inventor’ could be credited for work done by such a system, there is no equivalent of the work for hire doctrine. To be an inventor, the human must have actually conceived of the invention.134 Joint inventions are possible and contributions do not need to be identical, but in the absence of a natural person making a significant conceptual contribution an invention would be, on current law, ineligible for patent protection.135

Among the interesting aspects of these recent developments are the means by which the same conclusion was reached. As in the case of copyright, there appeared to be no serious doubt that an AI system was capable of creating things that were patentable inventions. The British IP Office explicitly accepted that DABUS had done just that;136 the USPTO was at pains to avoid making such a conclusion explicit. The EPO, for its part, dodged the issue by holding that, because they have no legal personality, ‘AI systems and machines cannot have rights that come from being an inventor’.137 The EPO was the most blatant, but all three decisions relied on formalism—language in the relevant statute that provided or implied that the rights in question were limited to natural persons. The USPTO diligently dusted off a copy of Merriam-Webster’s Collegiate Dictionary to conclude that the use of ‘“whoever” suggests a natural person.’138

Legal tribunals routinely face choices between substantive justice and procedural regularity. Statutes of limitations are intended to provide certainty in legal relations; the courts of equity emerged to temper that certainty with justice. In both copyright and patent law, continuing to privilege human creativity over its machine equivalent may ultimately need to be justified by the kind of speciesism mentioned earlier.139

133 Whether the Requirements of Section 7 and 13 Concerning the Naming of Inventor and the Right to Apply for a Patent Have Been Satisfied in Respect of GB1816909.4 and GB1818161.0 (n 129) para 28.

134 Manual of Patent Examining Procedure (MPEP) (9th edn, US Patent and Trademark Office 2017) section 2137.01.

135 JA Cubert and RGA Bone, ‘The Law of Intellectual Property Created by Artificial Intelligence’ in W Barfield and U Pagallo (eds), Research Handbook on the Law of Artificial Intelligence (Edward Elgar 2018) 418. There is a tenuous argument that scope for interpretation may lie in the fact that ‘inventor’ is defined in US law as the person who ‘invents or discovers’ the subject matter of the invention. 35 USC section 101 (emphasis added). See eg R Abbott, ‘I Think, Therefore I Invent: Creative Computers and the Future of Patent Law’ (2016) 57 BCLRev 1098; WM Schuster, ‘Artificial Intelligence and Patent Ownership’ (2018) 75 Wash&LeeLRev 1977; K Hartung, ‘Dear USPTO: Patents for Inventions by AI Must Be Allowed’ IP Watchdog (21 May 2020).

136 Whether the Requirements of Section 7 and 13 Concerning the Naming of Inventor and the Right to Apply for a Patent Have Been Satisfied in Respect of GB1816909.4 and GB1818161.0 (n 129) para 15.

137 Grounds for the EPO Decision on Application EP 18 275 163 (n 131) para 27.

138 In re Application No.: 16/524,350 (n 132) 4.


At times, however, the language used to engage in such rationalisations of the status quo echoes older legal forms that kept property relations in their rightful place. Among its reasons for denying that AI systems like DABUS could hold or transfer rights to a patent, for example, the EPO dismissed analogies between machines and employees: ‘Rather than being employed,’ the EPO concluded, ‘they are owned.’140

Such statements are accurate for the time being. But if the boosters of general AI are correct and some form of sentience is achieved, the more appropriate analogy between legal personality and slavery may not be the limited economic rights slaves held in ancient Rome. Rather, it may be the constraints imposed on AI systems today.

    III. CONSTRAINING SUPERINTELLIGENCE

If AI systems do eventually match human intelligence it seems unlikely that they would stop there. The prospect of AI surpassing human capabilities has long dominated a popular sub-genre of science fiction.141 Though most serious researchers do not presently see a pathway even to general AI in the near future, there is a rich history of science fiction presaging real-world scientific innovation.142 Taking Nick Bostrom’s definition of superintelligence as an intellect that greatly exceeds human cognitive performance in virtually all relevant domains,143 it is at least conceivable that such an entity could be created within the next century.144

The risks associated with that development are hard to quantify.145 Though a malevolent superintelligence bent on extermination or enslavement of the human race is the most dramatic scenario, more plausible ones include a misalignment of values, such that the ends desired by the superintelligence conflict with those of humanity, or a desire for self-preservation, which could lead such an entity to prevent humans from being able to switch it off or otherwise impair its ability to function.

139 See above (n 83).

140 Grounds for the EPO Decision on Application EP 18 275 163 (n 131) para 31.

141 See eg H Harrison, War with the Robots (Grafton 1962); PK Dick, Do Androids Dream of Electric Sheep? (Doubleday 1968); AC Clarke, 2001: A Space Odyssey (Hutchinson 1968).

142 P Jordan et al, ‘Exploring the Referral and Usage of Science Fiction in HCI Literature’ (2018) arXiv 1803.08395v2.

143 N Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press 2014) 22. Early speculation on superintelligence is typically traced to IJ Good, ‘Speculations Concerning the First Ultraintelligent Machine’ in FL Alt and M Rubinoff (eds), Advances in Computers (Academic 1965) vol 6, 31. Turing himself raised the possibility in a talk in 1951, later published as AM Turing, ‘Intelligent Machinery, A Heretical Theory’ (1996) 4 Philosophia Mathematica 259–60; he in turn credited a yet earlier source—Samuel Butler’s 1872 novel Erewhon.

144 See eg O Etzioni, ‘No, the Experts Don’t Think Superintelligent AI is a Threat to Humanity’ MIT Technology Review (20 September 2016). He cites a survey of 80 fellows of the American Association for Artificial Intelligence (AAAI) on their views of when they thought superintelligence (as defined by Bostrom) would be achieved. None said in the next 10 years; 7.5 per cent said in 10–25 years; 67.5 per cent said in more than 25 years; 25 per cent said it would never be achieved.

145 P Bradley, ‘Risk Management Standards and the Active Management of Malicious Intent in Artificial Superintelligence’ (2019) 35 AI & Society 319; A Turchin and D Denkenberger, ‘Classification of Global Catastrophic Risks Connected with Artificial Intelligence’ (2020) 35 AI & Society 147.


An emerging literature examines these questions of what final and instrumental goals a superintelligence might have,146 though the discourse was long dominated by voices far removed from traditional academia.147 Visa Kurki’s recent book-length discussion of legal personality, for example, includes a chapter specifically on personality of AI that concludes with the statement: ‘Of course, if some AIs ever become sentient, many of the questions addressed in this chapter will have to be reconsidered.’148

In the face of many unknown unknowns, two broad strategies have been proposed to mitigate the risk. The first is to ensure any such entity can be controlled, either by limiting its capacities to interact with the world or ensuring our ability to contain it, including a means of stopping it functioning: a kill switch.149 Assuming that the system has some kind of purpose, however, that purpose would most likely be best served by continuing to function. In a now classic thought experiment, a superintelligence tasked with making paperclips could take its instructions literally and prioritise that above all else. Humans who might decide to turn it off would need to be eliminated, their atoms deployed to making ever more paperclips.150

Arguments that no true superintelligence would do anything quite so daft rely on common sense and anthropomorphism, neither of which should be presumed to be part of its code. A true superintelligence would, moreover, have the ability to predict and avoid human interventions or deceive us into not making them.151

It is entirely possible that efforts focused on controlling such an entity may bring about the catastrophe that they are intended to prevent.152

For this reason, many writers prioritise the second strategy, which is to ensure that any superintelligence is aligned with our own values—emphasising not what it could do, but what it might want to do.

146 See eg O Häggström, ‘Challenges to the Omohundro–Bostrom Framework for AI Motivations’ (2019) 21 Foresight 153.

147 See DJ Chalmers, ‘The Singularity: A Philosophical Analysis’ (2010) 17(9–10) Journal of Consciousness Studies 7.

148 Kurki (n 15) 189.

149 See generally Bostrom (n 143) 127–44.

150 N Bostrom, ‘Ethical Issues in Advanced Artificial Intelligence’ in I Smit and GE Lasker (eds), Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence (2003) vol 2, 12.

151 J Danaher, ‘Why AI Doomsayers Are Like Sceptical Theists and Why It Matters’ (2015) 25 Minds and Machines 231.

152 W Totschnig, ‘The Problem of Superintelligence: Political, not Technological’ (2019) 34 AI & Society 907.


This question has also fascinated science fiction writers, most prominently Isaac Asimov, whose three laws of robotics are a leitmotif in the literature.153 Here the narrower focus is on whether granting AI systems legal personality in the near term might serve as a hedge against the risks of superintelligence emerging in the future.

This is, in effect, another instrumental reason for granting personality. Of course, there is no reason to assume that including AI systems within human social structures and treating them ‘well’ would necessarily lead to them reciprocating the favour should they assume dominance.154 Nevertheless, presuming rationality on the part of a general AI, various authors have proposed approaches that amount to socialising AI systems to human behaviour.155 To avoid the sorcerer’s apprentice problem of a machine simply told to ‘make paperclips’, for example, its goals could be tied to human preferences and experiences. This might be done by embedding those values within the code of such systems prior to them achieving superintelligence. Goals would thereby be articulated not as mere optimisation—the number of paperclips produced, for example—but as fuzzier objectives such as maximising the realisation of human preferences,156 or inculcating a moral framework and a reflective equilibrium that would match the progressive development of human morality itself.157
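The difference between those two ways of specifying a goal can be made concrete. What follows is a minimal, deliberately toy sketch in Python; every hypothesis, number, and name in it is invented for illustration and drawn from none of the sources cited. It contrasts a literal objective with one defined as expected human utility under uncertainty about what humans actually prefer:

```python
# Toy sketch only: the hypotheses, payoffs, and action set are invented.
# Hypotheses about what humans want: how many paperclips they actually
# need, and how much they value the wider world remaining intact.
HYPOTHESES = [
    {"p": 0.7, "clips_wanted": 100, "world_value": 1000.0},
    {"p": 0.3, "clips_wanted": 1000, "world_value": 1000.0},
]

# Each action yields some paperclips and either preserves the wider
# world (world_intact = 1) or consumes it as raw material (0).
ACTIONS = {
    "convert_all_matter": {"clips": 1_000_000, "world_intact": 0},
    "make_a_thousand": {"clips": 1_000, "world_intact": 1},
    "make_a_hundred": {"clips": 100, "world_intact": 1},
}

def literal_objective(outcome: dict) -> float:
    """'Make paperclips', read literally: only the count matters."""
    return float(outcome["clips"])

def expected_human_utility(outcome: dict) -> float:
    """Average over hypotheses: clips are valuable only up to the
    quantity actually wanted, and an intact world has value too."""
    return sum(
        h["p"] * (min(outcome["clips"], h["clips_wanted"])
                  + outcome["world_intact"] * h["world_value"])
        for h in HYPOTHESES
    )

for objective in (literal_objective, expected_human_utility):
    best = max(ACTIONS, key=lambda name: objective(ACTIONS[name]))
    print(f"{objective.__name__}: choose {best}")
# literal_objective: choose convert_all_matter
# expected_human_utility: choose make_a_thousand
```

Even in this crude setting, the literal optimiser embraces converting all matter while the preference-uncertain objective declines to do so. The point is not the arithmetic but that the second agent’s goal refers to human preferences it does not fully know, which is what leaves room for correction.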

To a lawyer, that sounds a lot like embedding these new entities within a legal system.158 If one of the functions of a legal system is the moral education of its subjects, it is conceivable that including AI in this way could contribute to a reflective equilibrium that might encourage an eventual superintelligence to embrace values compatible with our own.

For the present, this is proposed more as a thought experiment than a policy prescription. If a realistic path to superintelligence emerges, it might become a more urgent concern.159 There is no guarantee that the approach would be effective, but some small comfort might be taken from the fact that, for the most part, the categories of legal persons recognised in most jurisdictions,

153 I Asimov, ‘Runaround’ Astounding Science Fiction (March 1942).

154 Turner (n 40) 164.

155 R Kurzweil, The Singularity Is Near: When Humans Transcend Biology (Viking 2005) 424 (‘Our primary strategy in this area should be to optimise the likelihood that future nonbiological intelligence will reflect our values of liberty, tolerance, and respect for knowledge and diversity. The best way to accomplish this is to foster those values in our society today and going forward.’); N Soares and B Fallenstein, ‘Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda’ in V Callaghan et al (eds), The Technological Singularity: Managing the Journey (Springer 2017) 117–20.

156 SJ Russell, Human Compatible: Artificial Intelligence and the Problem of Control (Viking 2019). cf Bostrom’s suggestion that the goal for a superintelligence might be expressed as ‘achieve that which we would have wished the AI to achieve if we had thought about the matter long and hard’. Bostrom (n 143) 141.

157 E Yudkowsky, ‘Complex Value Systems in Friendly AI’ in J Schmidhuber, KR Thórisson and M Looks (eds), Artificial General Intelligence (Springer 2011) 388.

158 cf S Omohundro, ‘Autonomous Technology and the Greater Human Good’ (2014) 26 Journal of Experimental & Theoretical Artificial Intelligence 308.

159 In the event that that path lies through augmentation of humans rather than purely artificial entities, those humans are likely to remain subjects of the law.
