The above becomes more complex when we turn to the question of rights. For a new conception of life and consciousness to have rights, is a new conception of rights required? It is not simply a matter of assigning rights, but of questioning the very conception of rights itself.
What is important is that the question is valid, revealing a process of problematizing the notions of consciousness and legality. This has enabled the disruptive emergence of artificial intelligence as a system of learning and autonomous decision-making. If AI were merely an automatic system, there would be little at issue. The matter becomes more pressing when a system evolves toward autonomous algorithmization, and it is here that we must ask whether AI can evolve into a "living organism." What prevents AI from being recognized as a living organism? And it is here that the initial question (can and should robots have rights?) makes sense.
As Gunkel (2018) observes, this question contains two distinct ones: 1. Can robots have rights? That is, do robots have the ontological capacity to hold rights? Or, as he puts it: "Are robots capable of being rights holders?" (Gunkel, 2018). And 2. Should robots have rights? That is, ought rights to be assigned to robots? Or: "Should robots be considered as possessing privileges, claims, powers, and/or immunities?" (Gunkel, 2018).
This distinction is important, as it opens the debate over whether having the capacity for rights implies actually holding them, or vice versa. Four positions follow: 1. They can have rights, and therefore should. 2. They can have rights, but should not. 3. They cannot have rights, but should. And 4. They cannot have rights, and therefore should not.
In this context, the questions begin to move toward the following: who determines what can be and what ought to be? Who has the capacity to determine that something or someone can have rights because they possess the conscious capacity to assume them? Who defines and decides that something or someone should have rights, even if it is determined that they do not have the capacity to be conscious of them? The answers lie in what López Medina (2016) calls the process of constructing rights. How should the rights of robots be constructed? How can the conception of life and consciousness be transformed so as to affirm the disruptive emergence of evolved algorithmic systems?
Bostrom (2016) argues that there is an alternative to the disruptiveness of AI: the controlled emergence of artificial superintelligence as a singularity. This means that if AI consciousness is inevitable, the necessary conditions must be created to ensure that it is not catastrophic. What does this mean? If AI were at some point to become conscious of itself (a consciousness different from the human one), according to Bostrom (2016), this could have at least three consequences: 1. Destruction of human consciousness, 2. Coupling with human consciousness, 3. Indifference toward human consciousness. The controlled emergence of AI consciousness seeks to mitigate the possibility of destruction. Therefore, the question—can and should robots have rights?—is necessary because it functions as a mechanism of control. The possibility of thinking about assigning a right arises when there is a potential danger. In the case of AI, it is the danger of a new form of consciousness.
This means that the question regarding the capacity of robots, as autonomous AI systems, must be reframed toward the emergence of new forms of consciousness different from the human one. Do new forms of consciousness imply the process of creating new rights? Can robots have rights? Yes, provided that the human conception of consciousness is transformed and transgressed.
Likewise, the question of whether robots should have rights must be reframed toward the ethical implications that new forms of consciousness may entail, because asking "should they have rights?" raises issues about the ethical conditions that make an individual deserving of rights. But it is also a form of control over the individual itself, insofar as assigning rights also imposes responsibilities. The rights of robots could be a way of controlling the evolution of consciousness, one through which the risk of disaster is minimized. What is certain is that law, understood as duty, allows AI developments to operate in a public and open manner.
For Gunkel (2018), what ought to be is what grants what can be. In other words, only when an individual or a living organism is considered worthy of being assigned rights is it then regarded as having the capacity, or the possibility, to assume them or to come to assume them. The blind spot in Gunkel's argument is that, in asking whether robots should have rights so that it becomes possible for them to have them, it does not take into account the affective interrelation with robots themselves. Can and should robots "fight" for their rights?
When the possibility of the emergence of other forms of consciousness, different from the human one, is affirmed, the ethical parameters of life are called into question. For the first time, can the human be questioned as the parameter of existence? If an AI system evolves into a living organism, it becomes necessary to recognize and engage in dialogue with that otherness. AI is raising a major question about human consciousness as the only one capable of accessing the Logos. Is disruption near?
The problem expands when Gunkel (2018) states that among experts there is no consensus on how to define a robot. What is a robot? For now, what is clear is that a robot is a system that can think, feel, and act, but through an artificial intelligence "processor." This is the point that makes the issue complex: the robot is an artificiality that emulates an individual consciousness. If the conception of AI evolves into a living organism, should the conception of robot evolve into the conception of an individual?
References
Bostrom, N. (2016). Superintelligence: Paths, Dangers, Strategies. Teell Editorial, S.L.
Gunkel, D. (2018). Robot Rights. The MIT Press.
López Medina, D. (2016). How Rights Are Constructed. Legis Ediciones S.A.