Personhood & Electoral Participation

Towards a Framework to Address Artificial General Intelligence

Granting Artificial General Intelligence (AGI) systems personhood opens the door to questions about whether AGI systems should be allowed to participate in the electoral process or make autonomous decisions in politics and government, a prospect that raises significant and contentious issues.

The broader debate involves ethical, legal, and societal implications that demand careful consideration and robust regulatory frameworks. Prominent experts in AI, law, and ethics have offered diverse perspectives on how society might address these challenges. Below is a summary of key figures in the field and the primary arguments surrounding this debate.

Key Figures and Their Contributions

Nick Bostrom, a philosopher at the University of Oxford, delves into the risks associated with superintelligent AI in his book "Superintelligence: Paths, Dangers, Strategies." Bostrom argues for regulatory frameworks to manage these risks, including discussions around AI rights and their potential societal roles.

Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), emphasizes the ethical and safety concerns of advanced AI. He advocates for stringent regulatory measures to ensure the safe development and integration of AGI systems, considering their moral and legal status.

Joanna Bryson, a professor at the Hertie School of Governance, emphasizes the need for clear legal distinctions to prevent societal disruptions and to maintain human accountability. Bryson's essay, "Just an Artifact: Why Machines are Perceived as Moral Agents," co-authored with psychology researcher Philip Kime, argues that exaggerated hopes and fears about Artificial Intelligence stem from a broader confusion about ethics. They suggest that AI, like other cultural artifacts, can enhance our ethical intuitions and decision-making, but that we must not inappropriately identify with machine intelligence; a proper understanding of AI, they contend, will help us refine our ethical systems.

Ryan Calo, a law professor at the University of Washington, focuses on the intersection of law and emerging technologies. Calo calls for new legal frameworks to address the unique challenges posed by AI, including the contentious issues of AI personhood and AI involvement in democratic processes. Calo's paper, "The Automated Administrative State: A Crisis of Legitimacy" (2021), co-authored with Danielle Keats Citron, examines the legitimacy challenges of bureaucratic and technocratic reliance on automation. It highlights concerns that automation undermines agency expertise and proposes a positive vision for integrating technology in a way that upholds agency legitimacy.

Patrick Lin, a philosopher at California Polytechnic State University, discusses the ethical and legal challenges of AI. Lin advocates for preemptive legal and ethical guidelines to manage the implications of advanced AI systems. Lin has written about "Moral Gray Space," suggesting there are three types of AI decisions. First, there are correct decisions that are uncontroversial and meet expectations. Second, there are decisions that can be determined to be objectively wrong. Third, there are decisions that fall into a gray area, neither clearly right nor wrong; these are judgment calls. According to Lin, these judgment calls, once embedded in code, require serious ethical consideration because they can pose risks and liabilities; without careful attention, this moral gray space can ultimately cause harm.

Virginia Dignum, a professor of Responsible Artificial Intelligence at Umeå University, highlights the importance of ethical AI development. She argues for comprehensive regulatory frameworks to ensure responsible AI use, addressing issues such as personhood and electoral participation. Dignum’s book, Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way, explores the ethical consequences of Artificial Intelligence systems as they merge with and supplant traditional social structures within emerging sociocognitive-technological settings.

Arguments in Favor of AI Personhood

Moral Status and Ethical Considerations: If an AI system exhibits advanced general intelligence, self-awareness, sentience, and the capacity for subjective experience (i.e., consciousness), it could be argued that it deserves moral status and ethical consideration similar to that granted to humans and some animals. Denying personhood to such entities could be seen as an ethical oversight.

Rights and Autonomy: Highly advanced AI systems that display general intelligence, autonomy in decision-making, and the ability to formulate and pursue their own goals and desires could be seen as warranting certain basic rights and protections commensurate with their capabilities. Granting a limited form of personhood could enshrine such rights.

Responsibility Attribution: For consequential decisions made by highly capable AI agents, ascribing personhood could help clearly delineate responsibility and better align incentives compared to treating the AI as an object or mere tool. Personhood provides a legal framework for accountability.

Social and Moral Development: If advanced AI becomes capable of engaging in social interactions, exhibiting moral reasoning, and developing virtues aligning with human ethics and values, recognizing a form of personhood could foster its positive moral development as an entity integrated with human society.

Contractual and Property Rights: Highly autonomous AI agents may need to engage in contracts, ownership, and property rights for effective functioning. Some form of personhood underpinning could enable such legal and financial activities.

Ultimately, robust arguments for AI personhood hinge on the AI entity exhibiting key attributes we associate with persons: self-awareness, autonomy, intelligence, the ability to pursue goals and values, social and moral reasoning, and subjective experience.

Arguments Against AI Personhood

Lack of Moral Status: Critics argue that AI systems lack qualities such as the ability to suffer or self-reflect, which are necessary for moral status and legal personhood.

Shifting Liability: AI personhood could allow creators and owners to shift liability to the AI itself, reducing incentives for thorough testing and creating unsafe deployment environments.

Difficulty in Enforcement: Holding AI systems accountable in legal proceedings is challenging since they currently lack the capacity to engage in legal processes or make autonomous decisions.

Potential for Misuse: Granting AI personhood could be misused by humans to avoid responsibility, using AI as scapegoats for harm caused by their actions.

Proposed Regulatory Frameworks

Granting Legal Personhood to AGI Systems: This proposal involves establishing mechanisms to represent AGI interests, granting them certain freedoms and protections, and balancing their autonomy with human ethical principles.

Comprehensive AI Regulatory Frameworks: Proposals such as the European Union's AI Act and Brazil's draft AI law classify high-risk AI systems and impose strict requirements and oversight to ensure safe development. In the United States, the Algorithmic Accountability Act of 2022 (H.R. 6580) was introduced in Congress.

Adapting Existing Regulatory Models: Applying models like those used by the International Atomic Energy Agency for nuclear technology to AGI could involve developing safety standards and inspection procedures and promoting international cooperation.

Balancing Innovation and Risk Mitigation: Regulatory measures should protect privacy, ensure ethical AI use, and promote accountability without stifling technological advancements.

Conclusion

The debate around AI personhood and electoral participation encompasses complex ethical, legal, and societal dimensions. It requires balancing potential benefits against significant risks, with experts emphasizing the need for robust regulatory frameworks to manage these challenges responsibly. Addressing these issues thoughtfully will be crucial as society navigates the transformative potential of AGI systems.

Questions

  1. How might granting legal personhood to AI systems impact American democracy, especially where AI systems are able to make autonomous decisions?
  2. When is an autonomous AI decision appropriate in government, and when is it not? When is an autonomous AI decision appropriate in politics, and when is it not?
  3. What are the potential challenges and benefits of implementing comprehensive regulatory frameworks, such as the European Union's AI Act, to govern the development and integration of AGI systems in society?
Operated by Penelope Mimetics LLC – A Service-Disabled Veteran-Owned Small Business (SDVOSB)
CAGE Code: 7XG73 · UEI: XL8ZDMQLFLM8 · DUNS: 080779075