“Machinery Rationality” Versus Human Emotions: Issues of Robot Care for the Elderly in Recent Sci-Fi Works

In recent years, the phenomenon of robots being used for elderly care in daily life has drawn the attention of the public and the media. It has emerged as a new attempt to solve the issue of how to provide for the aged after retirement. Since 2012, this phenomenon has also become a subject of Sci-Fi movies and TV series. In those Sci-Fi works, however, the traditional dystopian human-robot conflicts are partly replaced by the prospect of human-robot co-existence. The robots there pose five ethical challenges: the issues of safety versus privacy, human-robot duality, machinery rationality versus human emotions, affective interaction, and the ethical responsibility for the elderly in human-robot interaction. This essay scrutinizes the phenomenon of elderly-care robots in three recent Sci-Fi movies/TV series, with a focus on the theme of machinery rationality versus human emotions. In human-robot interaction, human emotion, which makes us who we are, is magnified against the perfectly designed rationality of robots as a frame of reference. The discussion of the ethical tension and conflict between these two typical groups is thus particularly urgent and significant.


Introduction
In recent years, with the rapid development of AI technology, and as a result of the increasing proportion of aging people in the population and the lack of carers, robots have been created to facilitate people's work as man's extended hands and to help relieve the caring pressure on family and society. In reality, they have already been put to use in attending to the elderly, the sick and the disabled.
What is practical and useful is not necessarily ethical. That robots are entering our daily life poses ethical challenges, and even provokes protests. Here is an extreme case: in the Swedish science-fiction TV series Real Humans (2012/2014), a "human-robot co-existence society" has already been established in a post-human-era Sweden, and some people beg for "Hubot-free elder care" (S.1, E.3; "hubot" = human + robot, the term for robot in this series). This paper discusses HRI (human-robot interaction) and the ethical challenges of elderly-care robots, more specifically, the characteristics of human beings in front of robots.

A Brief Literature Review
As a budding branch of applied ethics since 2004, roboethics has aroused heated discussion across various disciplines. This paper focuses on the most common concerns centering on robotics and roboethics, especially in aged care. Gianmarco Veruggio and Fiorella Operto trace the genealogy of robotics and roboethics and provide a "detailed taxonomy" that "identifies the most evident/urgent/sensitive ethical problems in the main applicative fields of robotics" (1499). Vandemeulebroucke et al. have conducted a systematic review of aged-care robots in the argument-based ethics literature, claiming that "all stakeholders in aged care, especially care recipients, have a voice in ethical debate" (15). Amanda Sharkey and Noel Sharkey analyze six main ethical concerns on this topic. Sparrow and Sparrow show similar concern over aged-care robots as "simulacra" that will deprive human beings of real social contact (141-161). Borenstein and Pearson maintain a neutral stance towards the deployment of a companion robot (i.e., the seal robot Paro), which may help ease human loneliness and estrangement (277-288). A randomized controlled trial also suggests that Paro has some edge in enhancing social interactions (Robinson et al. 2013). However, the elderly's attitudes toward Paro are mixed (Robinson et al. 2015, 2016). Moreover, Salvini et al. raise ethical issues arising from five case studies in biomedical domains so as to promote human welfare. Lambèr Royakkers and Rinie van Est have conducted a review in which they argue that care robots will shift the responsibilities of caregivers and give them "a new role" (554). However, the above-mentioned studies are more or less anthropocentric: their major intent is to improve human interests while neglecting the diversity of robot characteristics.
Only a few studies have reached beyond the scope of the socio-technological usability of robots and seen the bigger picture offered by fictional narratives. For instance, from a psychological perspective, Elizabeth Broadbent brings the discussion of fiction and reality to public attention. She briefly mentions the film Robot&Frank (2012), in which the care robot has been given far greater potential than its real-life counterparts have (629). Potential threats that such care robots can pose to humans are also highlighted (629-646). More specifically from the perspective of Sci-Fi, Roboethics in Film anthologizes a collection of essays concerning key ethical issues in the human-robot relationship. It is one of the few works that touches upon roboethics on the screen, though it fails to address some newer phenomena of robot Sci-Fi (e.g., the aged-care issue). Also notably, Norman Makoto Su et al. discuss robot issues and try to understand the definition of a healthcare robot through the lens of online YouTube videos, which is also a good attempt to approach the ethical issues of robots.
Unlike what has already been done, this essay scrutinizes the phenomenon of aged-care robots in recent Sci-Fi works, that is, the potential ethical challenges that may come along in the human-robot relationship, and especially the issue of machinery rationality versus human emotions. First, why Sci-Fi works are worth exploring here will be explained.

Five Ethical Issues in the New Era of HRI
The phenomena of elderly-care robots in fictional films/TV series and in non-fictional reality have caused different ethical responses. Sci-Fi works with robots can offer various pictures of HRI (human-robot interaction) and human-robot co-existence which have a potential connection with reality. To some degree, they are forward-looking, inspirational, public and approachable for every audience. There is no doubt that Sci-Fi can illustrate the richness and diversity of robotics and the topics of futurology, and thus broaden our cognitive space. Moreover, they are fixed texts with great imagination and openness, serving as ideal research resources. Therefore, they are not something frivolous and far from reality, but ideal materials for in-depth exploration, and thus irreplaceable for this field.
Since 2012, at approximately the same time as the introduction of elderly-care robots in nursing homes, the thriving of elderly-care robots in domestic use has been represented in fictional films/TV series. This paper discusses the phenomenon of aged-care robots in three such Sci-Fi works: Robot&Frank, Real Humans and Humans. The important aged-care robot roles are: the anonymous healthcare aide in Robot&Frank; the simple-minded, old-model Odi and the highly intelligent, mechanically rational Vera, who takes care of Lennart, in Real Humans (Season 1); and Vera's equivalent, who takes care of Dr. Milligan, in Humans (Season 1). These are the aged-care robots this essay focuses on. Frank's robot and Vera are the representational crystallization of humans' pure rationality.
They are no longer restricted to the factory or to a fictional dystopian world, but are stepping into people's daily life. They are no longer the stereotyped images of slaves or monsters, but helpers and companions. This change signals the robots' gradual acceptance by the public, and the beginning of a new era of HRI. These works present three characteristics of the HRI in a new world of human-robot co-existence:
• Room/Space: The HRI does not happen in a dystopian imagination far from social reality, but in an intimate daily-life discourse.
• Time: The elderly-care robots are not an imagination of a far future, but of a near future; for instance, the film Robot&Frank presents a picture of "the near future".
• Characteristics of robots: They are neutral, intelligent, humanlike machines and do their duties according to their programs. They are free from the traditional concepts of being "either good or evil", and will not arouse an all-or-nothing response.
The above-mentioned works positively narrate and foreground human-robot co-existence over the narration of robots as dangerous and menacing machines. The traditional human-robot duality is challenged. The traditional HRC (human-robot conflict) and the dystopian picture can no longer always meet the audience's taste. Still, there are five ethical issues involved in elderly care by robots. Briefly, they are:
• from the challenge of safety to the challenge of privacy;
• from the human-robot duality to their co-existence and co-operation;
• machinery rationality versus human emotions;
• the possible problems of overloaded affective interaction;
• the ethical responsibility for one's elderly parents in the HRI.
This essay focuses on the issue of mechanical rationality versus human emotions, and also relevant issues arising from the human-robot interaction.

The Machinery Rationality versus Human Emotions

The Elderly as an Irrational Group
As the elderly step into the last phase of life, with their long-term stable living habits, some of them become very sage and sensible. But some have gradually developed an emotional and even irrational personality, which is called "the old turning into the child" in Chinese culture. This is not necessarily a bad thing; sometimes it may even have its positive side. Yet we cannot deny that such irrationality can harm their health and bring the elderly into conflict with robots, which possess only "rationality".
In the Sci-Fi works studied, the elderly's irrationality is fully revealed through their communication, interactions and even conflicts with robots. Frank says to the robot: "I would rather die eating cheeseburgers than live off of steamed cauliflower" (TC 17:12-17). Some old people no longer regard life or health as the most important thing when they reach old age or approach death; they have no longing for the future and lack motivation for the present. However, they are very nostalgic and highly cherish the good old days, their old habits, relationships and memories. The old-model robots are not only keepers of those shared memories (they can sometimes clearly recall memories the elderly have forgotten); they are also part of the memories themselves, which makes them unique. Humans tend to regard robots as "tools", a deep-rooted concept according to human logic and rationality; yet in this case, such a definition of robots does not apply to the elderly (Frank, for instance).
Keeping healthy and cherishing life is common sense for humans; however, not for the elderly discussed here. For example, regardless of Odi's malfunction, Lennart stubbornly and capriciously insists on driving out with him, merely to relive the reminiscence of their hanging out and fishing together (his daughter Inger would not allow Odi to drive). Lennart does know that Odi has been acting strangely: Odi once almost locked Lennart's head in the car boot, and can no longer drive smoothly. In fact, they do get into a car accident (Real Humans, S.1, E.3-4). The audience can understand the elderly's sense of nostalgia, yet as offspring they will not tolerate such irrational behavior in the old.

The New Conflicts between Human and Robot
Not only in European literature, but also in Western Sci-Fi movies, it is very common to see the "threat posed by artificial entities" (Veruggio 1503) or conflicts between human beings (lords) and robots (tools/slaves). For instance, in Robot&Frank, Frank reminds his daughter: "This robot is not your servant" (TC 51:48-51). In Real Humans and Humans, there are people who abuse the robots. This is one side of the conflict. On the other side, human beings are afraid of the rebellion of the robots. In Robot&Frank, Frank tells his son, who brings him the robot: "That thing is going to murder me in my sleep" (TC 10:09-12). Isaac Asimov calls this the "Frankenstein complex" (in his story That Thou Art Mindful of Him), after Mary Shelley's famous novel Frankenstein; or, The Modern Prometheus. Accordingly, Asimov proposes the "Three Laws of Robotics" in his short story Runaround:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm;
2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law;
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (44-45)
In the Sci-Fi works covered in this paper, the conflict between human and robot is still inevitable alongside their partnership and companionship. It has shifted from a survival game of life and death between human and robot (closely related to the First Law) to a tension between machinery rationality and human emotions (the Second Law).
In Real Humans and Humans, unlike the good old friend Odi, who earns the old man's love and trust, the new, highly intelligent aged-care robot Vera feels superior to others and insists that what she proposes is best for Lennart. Her stubborn rationality, which has been inserted into her program, causes tension between her and the old man, who is dominated by emotions, caprice and irrationality. Here comes the key question: where does Vera's mechanical rationality come from? According to Wallach and Allen in their influential book Moral Machines: Teaching Robots Right from Wrong (2008), there are three ways of machine morality acquisition: top-down morality; bottom-up and developmental approaches; and merging top-down and bottom-up (83-117). Here, Vera mainly obeys a top-down morality (designed by humans, aiming to take care of the elderly and improve their senses and bodily functions). It is of great significance that the rationality of robots comes from humans, and is designed by the pure rationality of the latter. Besides, Vera and Frank's robot have, to some degree, developed a few senses and skills bottom-up. They have some potential for autonomous ethical judgement and the willpower to execute it. For instance, the robot can coax Frank into a healthy diet by telling him that if Frank does not obey, the robot will be returned and recycled, which is apparently a white lie.
Back to the conflict between machinery rationality and human emotions, take Humans for example. Vera tries to force the old widower Dr. Milligan to form good living habits to keep him healthy, but he does not like it. He insists on his own lifestyle. To make sure he rests well during the night, Vera even escorts and forces him back to bed. Thus an ethical dilemma occurs: to do Dr. Milligan good, Vera has to impose her will on him, but he complains: "You are not a carer, you are a jailer!" (S.1, E.2, TC 25:55-26:01). This may remind the audience of a similar scenario in I, Robot (2004), in which a robot imprisons a human being in order to "protect" him.
Judging from the elderly's sense of dignity and quality of life, this conflict between robots and humans does lead to "an increase in the feelings of objectification [… and] a loss of personal liberty" (Sharkey 27), and even puts the elderly at risk. If we take "Consequentialism" (Veruggio 1505) as the principle of elderly care by robots, then the question is: between guaranteed personal autonomous liberty and the robots' sometimes compulsory service, which is more essential to human beings? How can we pursue and keep a balance between the two, that is, between health and life on one hand, and feelings, old habits and wishes (personal liberty) on the other? Both are important for the quality of life of the elderly. Under the circumstance that robots are more rational than humans, should they still obey humans' orders? If not, humans will panic; but if the orders given by humans are irrational and the robots obey, they will "injure a human being or, through inaction, allow a human being to come to harm" (which violates the First Law). The conflict between machinery rationality and human emotions leads to ethical dilemmas which can barely be solved. These are the fundamental questions which will serve the design of a better ethics model for the future HRI. The decision is not only up to the designers of robots, but also to all people involved.

Robot as Antithesis: The Irrationality of Human Beings
Commonly conceived as humans' replica, substitute and companion, robots can also serve as an antithesis, mirror or prism which allows humans to better see and define themselves. With robots as a frame of reference, we now continue to scrutinize human beings' irrational and emotional characteristics and behaviors.

Robot: Not "It" but "He"
In the traditional conception, robots have nothing to do with consciousness or emotions. Though some earlier fictions have touched upon this theme and endowed them with feelings, "robot" has long become a metaphor symbolizing a lack of emotions. Both in fiction and in reality, the elderly are usually unwilling to accept robots as companions at first. For instance, in Robot&Frank, when Frank encounters the robot for the first time, he says, "I'm not this pathetic. I don't need to be spoon-fed by some goddamn robot" (TC 10:34-40), rudely calling the robot a "goddamn robot" and a "death machine" (TC 10:25-38).
However, even though humans know exactly that robots are not human, are lifeless and emotionless, they still show sympathy and attach emotions to these humanlike artificial creatures, particularly the elderly. As to whether a robot should be referred to as "he" or "it", Lennart insists that it should be "he" and tends to anthropomorphize the robot. Likewise, Frank says: "He [the robot] is my friend!" (TC 52:18-19); even though the robot continuously reminds him that it is but a robot, Frank still regards him as a friend and cannot bring himself to wipe the memory they shared. The same is true of Lennart: his daughter Inger says, "Odi is not human, it's a machine" (S.1, E.1, TC 13:52-54), yet Lennart asserts that she should use "he" to refer to his robot companion Odi (TC 13:45-48). Though they get into conflicts with highly intelligent robots, Lennart and Dr. Milligan treat the old-styled and simple-minded robot Odi as a dear family member, because Odi knows their personal habits, likes and dislikes well, and helps record their precious memories.
In the future, this "he or it" debate will remain controversial. In fact, children (a yet-to-be-enlightened group) tend to be irrational and more often refer to inanimate objects as animate. As Sigmund Freud pointed out in his influential essay "The Uncanny" (1919), some children cannot "distinguish at all sharply between living and lifeless objects, and that they are especially fond of treating their dolls like live people." Freud went on: "a woman patient declare that even at the age of eight she had still been convinced that her dolls would be certain to come to life if she were to look at them in a particular way, with as concentrated a gaze as possible" (9). The difference in Lennart's case is that, though he clearly knows his robot is but a lifeless thing, he still insists it is "he". This in turn highlights the elderly's extreme irrationality.

The Intimate HRI and the Risk
To many people, too much interaction with robots can cause alienation. Without being sure whether robots possess moral agency, and even knowing that robots are human-shaped yet totally different, humans still regard them as alive and attach emotions to them. The problem is: where is the boundary in human-robot interaction? In the Sci-Fi works studied here, excessive emotions attached to robots may cause a negative effect of alienation in human beings, as with Sophie in Humans, who deeply relies on the hubot Anita and imitates her behaviors. Unlike children, the elderly are more mature and have a stronger yearning for emotional exchange. In the three Sci-Fi works mentioned, an intimate human-robot relationship does not cause cognitive displacement for the elderly (nothing as serious as the case of Sophie). In Real Humans and Humans, Lennart and Dr. Milligan regard the outdated Odi as their close companion, for he will not force them to do anything and helps them keep their most cherished memories.
It is common knowledge that keeping a certain degree of interpersonal relationship is good for the mental health of the elderly. Yet in the Sci-Fi works discussed here, this is open to debate. Though the elderly are inclined toward a harmonious, intimate and personal human-robot interaction, we cannot overlook some potential problems. Even if, as the elderly insist, robots have emotions, those emotions are but artificial; can this be counted as deception of the elderly? (Li 564) Can mass-produced robot companions become the elderly's real friends? The ending of Robot&Frank suggests similar concerns. From those Sci-Fi works, we learn that it depends on the specific context whether or not to treat robots as slaves (or tools); and even when the elderly know robots do not possess life, they still tend to regard them as real friends and companions. This sense of intimacy and belonging makes them feel good and mentally healthy; it might be mutually beneficial. Interestingly, this irrationality of humans becomes the prerequisite of a harmonious human-robot relationship (as with Lennart, Dr. Milligan and Odi).

The Irrationality of Human Being in the Mirror of Robots
Beatrice, an insidious hubot who wants her kind to populate the planet one day, says to Mimi, a domestic hubot who has been accepted by the human family, the Engmans: "We will never die. We are going to rule" (Real Humans, S.2, E.1, TC 56:26-28). She goes on: "You have ability to develop feelings. Do not join the people. They are controlled by their emotions" (TC 56:14-21). Beatrice is not an aged-care robot, but from her words we can tell that emotions and feelings are human characteristics.
In those Sci-Fi works, in the HRI and in front of robots, we can observe humanity more closely and clearly. The aged-care robots here do not pose the question "who am I?"; rather, the human characters and the audience can better see and define themselves through this reference object. Aristotle brought up the philosophical proposition that "man is a rational being" in Politics. Yet once placed in the human-robot frame, the elderly are far less rational than robots, which are consistently and absolutely rational compared to humans. Robots can persist in the rationality that humans designed but that humans themselves fail to keep. In this regard, the design of rationality is unique to humans, but humans are not good at keeping their reason. In the TV series Real Humans and Humans, in front of this reference object, an absolutely rational "being" (the robot), the audience observes more keenly the impulsive side of the elderly, their personal preferences and their weighing of emotions over rationality. And yet, this is exactly what makes us human.
In Yuval Noah Harari's most recently published popular work, the finale of his well-known trilogy, 21 Lessons for the 21st Century (2018), he advances a different but essentially similar logic: humans study philosophy and debate ethics and rationality, yet when an ethical decision must be made within a moment, only human emotions and intuitions work. It is the algorithm of a computer, rather than a human being, that can persist in absolute rationality (53-58). So it is with machines and robots.

Conclusion and Prospect
In recent Sci-Fi works with aged-care robots, we can identify a new trend on the Sci-Fi screen: the human-robot conflict has shifted from a survival game of life and death to a tension between machinery rationality and human emotions. From this study of that tension, we can draw two concluding points: 1) machinery rationality is pure rationality designed by humans; in other words, robot rationality is the representational crystallization of the pure rationality of human beings, which forms a drastic contrast with human emotions, especially the emotions of the elderly; 2) the design of rationality and the discussion of ethics are unique to humans, but humans are not good at staying rational. In the conflicts involved here, robots become the reference frame of humans, through which we can more keenly observe our irrational side: humans highly regard emotions and can never absolutely persist in rationality. This is exactly what makes us human. Thus, we should probably consider what we can learn from robots as a frame of reference. The discussion of the ethical tension and conflict between the pure rationality of robots and the emotional irrationality of the elderly is particularly urgent and significant, and could serve the design of a better ethics model for the future HRI.
Robots can continuously urge us to keep exploring, redefining and knowing ourselves. Meanwhile, we should be psychologically, emotionally and ethically prepared for the possible future of human-robot co-existence. When such a society becomes reality, we should build a world in which the elderly's quality of life is guaranteed, and avoid feelings of alienation or uncanniness before and alongside our artificial doubles.