Having a digital presence made up of social media profiles has become the norm, but the concept is being extended into the realm of digital and robotic avatars. Can artificial embodiments of the self be held to the same standards as their physical users, and can the law effectively legislate them?
Investment in robotic avatars is growing. The market for personal robots is expected to reach roughly $18.85 billion by 2020, a significant portion of which will be made up of robotic avatars. The XPrize Foundation, for instance, has recently announced the ANA Avatar XPRIZE - a $10 million prize for teams that can create an avatar “that will transport a human’s sense, actions and presence to a remote location in real-time, leading to a more connected world.”
It won’t be long before we begin to see avatars in medicine, and a number of European research initiatives already focus on the topic. In 20 years, it may not be unusual for a tetraplegic person to have a robot avatar: someone might lie at home on their couch while doing their grocery shopping through their avatar in a supermarket nearby.
There are risks here, though. It’s easy to imagine a robot avatar being attacked, for example, whether in an attempt to steal the groceries or as an act of random vandalism. Violence towards robots is unavoidable and there is no reason to believe that avatars will be treated any differently.
“Virtual reality may appear to be a safe environment, but this might not necessarily be the case”
This may happen in virtual environments as well as in the real world. Imagine a teenage girl who has an avatar on a popular social network. She has made it resemble her, and she interacts with other avatars on the virtual platform. Virtual reality may appear to be a safe environment, but this is not necessarily the case.
Avatars controlled by much older men, for example, could well take advantage of the anonymity that the platform offers to sexually assault the girl’s avatar. In the United States, two male avatars have already assaulted the avatar of a 7-year-old girl while she was playing the game Roblox.
Where does the actual responsibility for these offences of the future lie and why should it matter?
What is Avatar Embodiment?
Avatar embodiment is a feat of technology through which the user experiences the body of an avatar as if it were their own. Avatars can be digital or physical - the former are used in virtual reality and ordinarily take the form of computer-generated characters. Another fascinating form of embodiment, though, is achieved with physical avatars, using humanoid robots.
Avatar embodiment is achieved, in part, with the use of head-mounted displays - devices worn on the head that provide the wearer with the avatar’s perspective. This is combined with body-movement synchronisation between the user and the avatar so that, when the person moves, the avatar moves accordingly.
“People often experience what has been called a ‘sense of presence.’ This is the feeling of being there, in the same location as the avatar”
During these embodiments, people often experience what has been called a “sense of presence.” This is the feeling of being there, in the same location as the avatar.
An important collection of studies in the field of media psychology has examined how people behave during avatar embodiment, both with robots and in virtual reality. They show that users not only sense that they are there, but also respond to objects, virtual humans and threats in these environments as if they were real.
The scientific literature lends strong support to the idea that, during experiences of mediated embodiment, the sense of self extends to the body of the avatar. Somehow, the avatar becomes part of us during these experiences.
Should the Law Care?
Some may argue that much of what people can do in reality cannot happen in a virtual environment. You will not die, for instance, if someone kills your avatar. Similarly, imagine two users interacting in a virtual environment - one in Australia, the other in Europe.
The distance between them would make enforcing any law difficult - how would an authority in Australia find the offender in Europe? And many offences, like disturbing the peace, make little sense in the realm of virtuality.
Some may even argue that there is little evidence of robot avatars being attacked and that, if such attacks exist, the stories have been exaggerated. Consider, though, the widely reported cases of children ‘abusing’ a social robot in a shopping mall, or of the hitchhiking robot HitchBOT being ‘murdered’. HitchBOT could have been someone’s avatar, embodied and controlled remotely by a person located elsewhere.
Yet defamation from 5,000km away remains defamation, much the same as copyright infringement. Think, too, about data protection and privacy, the security of the avatar, and other boundaries on the use of such devices that could cause legal difficulties.
Mediated embodiment experiments have provided scientific evidence that avatars are an extension of the self, not mere tele-operated robots or virtual projections. If, then, an avatar is attacked, the person who embodies it might suffer psychological harm as well. Indeed, there is a growing legal and ethical discussion on virtual sexual assault and virtual rape.
“The legal and ethical consequences of the development of avatars need to be taken seriously, given that avatars are not only mere robots or digital projections but also akin to surrogate bodies”
The legal and ethical consequences of the development of avatars need to be taken seriously, given that avatars are not only mere robots or digital projections but also akin to surrogate bodies. Although there may already be laws that apply to mediated embodiment, the lack of a clear legal framework that governs these technologies creates legal uncertainty.
If the Robot Harms Someone, Whose Fault is it?
In the legal domain, there is a growing area of research concerned with the so-called responsibility gap. The theory suggests that, if robots learn as they operate and can, in the course of that operation, change the rules by which they act, then there is no reason why humans should be held responsible for the robot’s autonomous behaviours. On the contrary, robots themselves should be held responsible for their autonomous decisions.
This has led many legal scholars to reflect on which metaphor best illustrates the concept - in other words, on whether precedent can be found in the legal status that existing law ascribes to non-human things:
Corporations. Corporate personhood means that corporations enjoy some of the legal rights and responsibilities granted to humans.
Slaves in ancient Rome. Ugo Pagallo explains in his book that Roman law had an institution called peculium, which covered transactions made by a slave - at that time legally considered a thing - in the name of the master. He argues that robot responsibility could be inspired by this institution, under the name digital peculium.
The European Parliament has recently positioned itself on the matter. It has called on the European Commission to explore the implications of “creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently.”
Questions to be Answered
Does this legal status apply to robot and digital avatars? If robot and digital avatars are an extension of the self, are the persons themselves the ones responsible for any harm caused by their avatar?
This may also apply to robots that, rather than being the alter-ego avatar of a person, are simply remotely controlled. Indeed, controlling a robot might be more efficient if control is divided between the user and the robot itself in a shared-autonomy mode.
In the case of avatars, users are normally in almost complete control. Should, then, the responsibility of certain robots remain with the operator of the robot? If the robot is a product, should it then be governed using product liability?
Me, Myself and My Avatar
If the sense of self extends to the avatar, the user shares in the avatar’s experiences. Avatar embodiment, then, raises a host of legal and ethical considerations. These challenges need to be discussed further from a perspective that is both interdisciplinary (involving media psychology, law, engineering and ethics) and inclusive (involving users, platforms and developers).
We should think carefully about the role we have, as a society, in shaping the use and development of mediated embodiment technologies. We should start reflecting on the legal and ethical implications of mediated embodiment technologies along with the risks of being immersed in an environment where the real and the virtual are blurred.
About the authors
Dr. Fosch-Villaronga is a Marie Skłodowska-Curie Postdoctoral Researcher at the eLaw Center for Law and Digital Technologies at Leiden University in the Netherlands. His area of expertise is law applied to emerging robotic technologies. He mainly investigates the legal and ethical issues raised by robot technologies, with a special focus on healthcare, and he is interested in smart regulation, human-robot interaction and the future of society.
Dr. Aymerich-Franch is a Ramón y Cajal research fellow in the Dept. of Communication at Pompeu Fabra University, Barcelona, and a visiting scholar at the Personal Robots Group of the MIT Media Lab, Massachusetts Institute of Technology. Her area of expertise is Media Psychology applied to emerging communication technologies. Principally, she works with social robots and virtual reality. Her research promotes the positive applied uses of these technologies in the area of psychological well-being.
The matter of an individual’s digital presence living on after death has become a contentious issue in the age of social media profiles. Facebook offers the option to convert your profile into a memorial page after you die, while third-party apps such as ‘If I Die’ allow users to schedule a video or post to be sent from their account after their death.
In the developing age of avatars, the issue is becoming more complex. Startup Eternime is offering a service that uses customers’ social media posts and other personal material to create an AI-powered avatar of themselves. This can be activated after a person’s death in order to continue communicating with their friends and family, thereby achieving apparent digital immortality.
Illustrations by Kseniya Forbender
To contact the editor responsible for this story:
Margarita Khartanovich at [email protected]