The Rise of the Care-Bots

In his 2017 blog post ‘The Carebot Dystopia: An Analysis’, John Danaher criticized the idea that future care provided primarily by robots would necessarily lead to a dystopian state of affairs. Danaher’s post focused on two papers which painted dystopian pictures of a world of robot care: Coeckelbergh’s ‘Artificial Agents, Good Care, and Modernity’, and Sparrow and Sparrow’s ‘In the Hands of Machines? The Future of Aged Care’. Danaher’s argument, like the arguments of the papers he critiqued, focused primarily on the care of the elderly, which is an understandable focus. As he correctly noted, people are living longer and longer, and the older we get the more likely we are to develop diminished capacities in a variety of areas. The elderly do not necessarily need to be cared for; some people in their 90s live independent lives without any care. Nonetheless, as we age we typically come to require care because of some form of disability that accompanies aging: disabilities in mobility become more likely, and likewise cognitive disorders such as dementia. So given that humans are living longer and longer, finding better techniques to care for the elderly becomes more important.

While Danaher’s focus on the care of the elderly is understandable, it is unnecessarily narrow. Another area where robot help could become a possibility, and which raises the same ethical concerns, is the care of disabled people in general. Care-bots could be used to help people with intellectual disabilities and people who are disabled as a result of neurological disorders, strokes, car accidents, and so on. Since the real issue is the use of care-bots to help people who are disabled, focusing primarily on the elderly, not all of whom are disabled, seems off target.

One of the key arguments made against the use of care-bots is that they cannot provide loving care for the people whose needs they are supposed to support, and that they are therefore deficient tools (Sparrow and Sparrow, p. 17). Sparrow and Sparrow make this claim about the deficiencies of care-bots central to their dystopian thesis:

“In the meantime, robots are clearly not capable of real friendship, love, or concern – only (perhaps) of their simulations. This has a number of important implications for their involvement in caring roles… as we argued above, robots will by and large not be capable of succeeding in aged-care roles where emotional work is an important part of the role” (ibid., p. 17)

Sparrow and Sparrow go on to make an even stronger claim: since such robots are incapable of actually caring for those they serve, it is morally reprehensible to use them, because doing so may delude those in care into thinking that the care-bot is a friend. Danaher gave a convincing argument that there is little reason (other than meat chauvinism) to assume that a care-bot won’t eventually be capable of genuinely empathically ministering to those who employ it. While Danaher has a point, the fact is that we have no idea whether we will ever be able to create care-bots that actually care. Rather than speculating about whether future AI research will ever solve the hard problem of consciousness, I think we should be asking a different question: do disabled people actually want their care-bots to care for them?

Research in disability studies indicates that people may not actually want a care-bot (or a human carer) to act as a friend. As the philosopher Eva Feder Kittay noted in her ‘Learning from My Daughter: The Value and Care of Disabled Minds’, many people with disabilities would prefer to view the people who provide their care as professionals doing their jobs, rather than as people who are in a loving relationship with them:

“Although family members most often fill the role of the paid attendant, the preferences of many disabled adults, interestingly, do not favor this arrangement. The sense of independence disabled people hope to attain is, according to some, best served by a paid stranger with whom one has a thin relationship of employer to employee instead of the thick relationships of family. In order to sustain a sense of independence – or, as I shall suggest, the illusion of independence – the personal attendant must become invisible. In the words of Lynn May Rivas, the carer needs to turn over the “authorship” of her work to the disabled person” (Kittay 2019, p. 158)

To such people, the fact that a robot helping them doesn’t care for them would not be a drawback; it would be a positive. Indeed, one of the benefits some disabled people would see in care-bots is precisely that they are tools that can be used to increase their own independence. People with disabilities, like everyone else, have friends and family; they are not looking for care-bots to fill a caring void, but rather want tools they can use to increase their independence as they interact with the world.

Missing from the normative claims about the use of care-bots is any attempt to ask the disabled and the elderly whether they would like to use them. Asking this would need to be done in tandem with ascertaining their degree of understanding of the nature of the care-bots. Instead of such questions we get the opposite course: what they want is ignored, and we are given a normative argument that such bots are intrinsically bad independent of what the patient wants. Such normative arguments aren’t bad per se, but they do have the drawback of ignoring the descriptive question of what the people in need actually want.

Sparrow and Sparrow do note the importance of ascertaining whether people actually want to be looked after by care-bots. They even acknowledge (in a footnote at the back of their paper) that there is some evidence that some elderly people have positive attitudes towards care-bots. There is, however, still no conclusive evidence, positive or negative, about the attitudes that elderly people who need assistance, or other disabled people, hold towards care-bots.

Sparrow and Sparrow worry that even if the elderly do have positive attitudes towards care-bots, this may reflect ignorance of what care-bots really are. To avoid this difficulty they suggest a deliberative process in which prospective users can debate the pros and cons of the issue and come to an informed decision before settling on their views. I am sympathetic to such approaches, but there is a difficulty: such a process leaves out a substantial proportion of disabled people, who may be incapable of engaging in it.

A great many people with disabilities have either no linguistic competence, or linguistic competence so limited that it effectively bars them from engaging in the deliberative process. People who suffer damage to Broca’s area of the brain (as a result of accident or stroke) can lose their ability to produce natural-language sentences. These people still have preferences and wishes, even if they can no longer express them through language. Similar considerations apply to various developmentally disabled people who, for whatever reason, do not develop fully functional language. Despite the fact that they cannot speak, they can think and have preferences about the things they like in their lives.

Now, people like Sparrow and Sparrow would probably agree with me that it is a pity that people without language are barred from the deliberative process when it comes to the care-bot issue. But they could add that there is little to be done about it: if people are incapable of engaging in public deliberation, they are unfortunately barred from the choice in relation to care-bots.

I would argue that it would be a mistake to exclude non-linguistic people from having a say in who provides their care. We regularly discover people’s preferences by looking at their behaviour. In a care setting, people without language routinely indicate their food choices by pointing to a picture of the food they want, pointing to the food itself, or handing back food they don’t want. Even with people who do have language, we can discover facts about them by looking at their behaviour: some people claim that they dislike Facebook because of ethical concerns about what it does with their data, yet they spend hours on it every day. Their behaviour reveals preferences that contradict their explicit verbal pronouncements. In the case of non-linguistic people, behaviour is all we have to go on when trying to discover what they want.

We do not know whether non-linguistic disabled people would prefer to have their needs met by care-bots. But it is possible to find out: use care-bots to help them on a limited basis and see whether they find the process helpful. If they choose to interact with the care-bots as much as they do with ordinary carers, that is an indication that they find the care-bots useful. Studies to discover revealed preferences in relation to care-bots would be difficult to design and would involve complex ethical considerations. It may also turn out that such a study conclusively shows that people do not like interacting with care-bots. But what if non-verbal disabled people do like being cared for by care-bots?

Sparrow and Sparrow are committed to the view that care-bots are by nature unethical: they have the potential to mislead people about what they are, and hence are inherently deceptive. People will erroneously think that the care-bots are their friends, while the care-bots are in fact nothing more than uncaring mechanical devices. As we saw above, not all people with disabilities want their carers to be their friends, which mitigates some of the Sparrows’ concerns. It is of course possible that some people will be deceived about the nature of the care-bots. But if the care-bots end up increasing the quality of life of the people they help, and if those people prefer interacting with the care-bots, one could argue that this outweighs any concern about people being potentially confused about what the care-bots are. Furthermore, it is equally possible for people to be confused about the nature of their relationship with the humans who care for them; such concerns are not limited to care-bots. So it would be unfair to rule out the use of care-bots simply because they have the potential to mislead some of the people they help.
