Monthly Archives: June 2020

Le Doux, Jablonka, the Cambrian Explosion and Anxiety

“Survival behaviours thus have very old roots that make them universal. But the kinds of experiences humans call conscious feelings, that is emotions, I propose are a much more recent development, possibly emerging via evolutionary changes in the human brain a mere few million years ago” (Le Doux, ‘The Deep History of Ourselves’, p. 4)

In their recent book ‘The Evolution of the Sensitive Soul’, Jablonka and Ginsburg[1] argue that once animals developed an ability called ‘Unlimited Associative Learning’ (UAL), they would acquire sentience as a by-product. Their fascinating and detailed story attempts to discover how sentience evolved, and traces it back 550 million years to the Cambrian Explosion. While the story they tell involves detailed scientific analysis, it also has a dramatic feel. The world of the Cambrian Explosion contained not only the most complex predator/prey interactions in the history of the world up to that point; according to Jablonka, its creatures also lived lives of constant anxiety. These creatures were the first to have feelings about the world they inhabited, but they had not yet developed systems to help them manage those feelings. So they lived in a state of constant fear. Jablonka goes as far as to imply that such creatures lived in an emotional paranoid state that was akin to psychosis:

“The Cambrian probably witnessed the first nervous breakdowns: the first Cambrian subjectively feeling animals would have been nervous wrecks, with the equivalents of paranoia and posttraumatic stress disorder (PTSD) dominating their mental lives” (‘The Evolution of the Sensitive Soul’, p. 430)


Jablonka’s grim but dramatic story portrays the Cambrian Explosion as a kind of subjective nightmare for its inhabitants.

The story she tells is one that holds interest for the philosophy of mind, biology, neuroscience, etc. Another area where her story of the evolution of consciousness will have importance is the subject matter of psychiatry. When scientists try to understand human suffering and mental pathologies, they partially rely on models derived from the study of other mammals. The assumption is that such mammals, because they have brain structures similar to our own, provide reliable but fallible ways of understanding the neuroscience of human suffering. Scientific studies of mammals in various threatening situations indicate that they behave in ways similar to the way humans behave (the fight-or-flight response), and that their brain states when they engage in fear behaviour are similar to our brain states when we engage in fear behaviour. In all mammals the almond-shaped structure deep in the temporal lobe (the amygdala) plays a key causal role in fear behaviour. For this reason, when psychiatrists and psychologists are trying to understand psychological disorders such as anxiety disorders or phobias, they rely heavily on animal models.

The reason we rely heavily on animals when trying to understand human behaviour is that it is easier to study animals directly than it is to study humans. There are obvious ethical constraints on doing invasive experiments on humans. When it comes to studying other animals, mileage varies on the degree to which we can study them invasively. Humans are typically reticent about studies on animals to whom they have emotional attachments, such as fellow primates (e.g. chimps) or dogs. Such reticence is tempered when experiments seem likely to cure diseases that plague humanity, such as cancer or various neurological disorders. But unless there seems to be a direct practical benefit to doing invasive experiments, most humans would deem them an unjustified practice[2]. Hence the majority of invasive research on animals is performed on animals to whom humans are less closely attached; rats are the paradigm experimental animal. As mammals they have brains structured similarly to our own, and rats don’t typically trigger our empathy; hence they are the go-to animal neuroscientists study when trying to understand the neuroscientific nature of our subjective experiences.

Studies of rats indicate that not only do they have brain areas structured similarly to our own, they also have behavioural patterns similar to ours. Jaak Panksepp has argued, based on behavioural and neuroscientific evidence, that all mammals share with humans seven basic emotional systems: (1) Fear, (2) Panic/Grief, (3) Seeking, (4) Lust, (5) Care, (6) Play, (7) Rage. There is little debate about Panksepp’s characterisation of the similarly structured brain states and behavioural patterns in all mammals. The issue is whether all mammals actually subjectively experience fear when they engage in fear behaviour. For Higher Order Theorists of consciousness like Joseph Le Doux, there isn’t sufficient evidence for us to attribute conscious states to non-human mammals, as they lack certain cortical features that Le Doux deems necessary for a state to be conscious. In his book ‘Anxious’ Le Doux argues that the ineffectiveness of contemporary neuro-pharmacological treatments for mental illness stems from the fact that theorists think the drugs are attacking conscious feelings, when in fact the drugs are merely attacking behavioural responses (a rat freezing) that may not be accompanied by subjective states of any kind.

Given that the debate as to whether all mammals have subjective experiences has a practical bearing on subjects such as neuropharmacology, Jablonka’s argument that subjective experience predates mammals by millions of years isn’t just a theoretical question but a very practical one.



Ginsburg and Jablonka’s Evidence

“Using these criteria, we identify four new transitions in the evolution of neural animals: the transition from multiple exploratory and directed reflexes to limited associative learning; the transition from limited associative learning to UAL and minimal consciousness; the transition from imagination to symbolic representation and communication” (‘The Evolution of the Sensitive Soul’, p. 343)

Like Le Doux, Jablonka focuses on the evolution of movement and on creatures that have the capacity to move towards nutrients and away from danger. Even simple bacteria have this capacity. Jablonka agrees with Le Doux that such simple creatures probably do not have sentience. Jablonka’s story diverges from Le Doux’s in that she argues that creatures develop consciousness with the evolution of Unlimited Associative Learning, whereas Le Doux argues that higher cortical areas are the magical sauce that creates consciousness. However, before discussing Unlimited Associative Learning, I first need to outline the nature of its simpler ancestor, Limited Associative Learning.

Limited Associative Learning: Skinnerian Organisms

The evolution of Limited Associative Learning[3] provided a massive advantage for creatures who had it. The advantage it provided can be seen in the number of living creatures today who still use it as a form of learning. Jablonka notes that as well as fish, amphibians, reptiles, birds, and mammals, even invertebrate groups including nematodes, platyhelminths, crustaceans, chordates, arthropods, annelids and molluscs display LAL (ibid p. 294). She also notes that all animals that demonstrate evidence of limited associative learning are bilateral, have brains, and probably evolved in the Cambrian era (ibid p. 293).

In order to understand LAL one must first understand how it relates to conditioning. Since Pavlov discovered classical conditioning, and Thorndike and Skinner discovered operant conditioning, there has been much debate about how the two relate to each other. The primary question people have wanted to resolve is which is more fundamental: classical or operant conditioning? The consensus, backed more by tradition than evidence, is that classical conditioning is the fundamental form and that operant conditioning was derived from it.

Jablonka parses classical conditioning as a type of world (perceptual) learning, and operant conditioning as a kind of self-learning. In classical conditioning a particular stimulus (food being presented) causes an unconditioned response (salivating). By constantly pairing the food stimulus with a neutral stimulus (a bell ringing), the animal eventually develops a conditioned response of salivating when the bell rings. In this experimental setting the animal is being passively conditioned to associate one stimulus with another. In operant conditioning the animal is more active. The animal engages in a type of behaviour, and this behaviour has certain consequences. A form of behaviour which becomes more frequent as a result of its consequences is said to be positively reinforced.

Jablonka notes that despite the distinction between classical and operant conditioning seeming very clear, it is a hard distinction to parse experimentally. Pavlov’s experiments did stop the dogs from exploring their environment, and so to a certain degree did manage to focus on the dogs’ perceptual associations rather than the consequences of their exploratory actions in the world. Jablonka argues that despite some limited success in dissociating the two forms of conditioning (Colomb and Brembs 2010), in reality they form a continuum. Operant conditioning is a type of conditioning involved in self-learning, which involves assigning value to a particular action or movement; classical conditioning is a form of world learning, which involves assigning value to sensory stimuli (ibid p. 298).

Jablonka defines LAL as follows:

“We define LAL as conditioning that includes both self and world learning and involves the formation of predictive relations between non-compound stimuli, actions, and reinforcers” (ibid p. 321)

To explain LAL Jablonka leans on the phenomenon of blocking, which she argues demonstrates an organism inhibiting the learning of irrelevant stimuli. The psychologist Leon Kamin first studied blocking, in which the presentation of a new conditioned stimulus (CS) alongside a CS that already perfectly predicts the reinforcer fails to support new learning (ibid p. 322). Jablonka describes this in terms of an organism predicting reinforcement, and situates it within the prediction-error paradigm. What an animal already knows (that certain smells lead to food) doesn’t need to be learned again; but if the animal discovers that something else (a bell ringing) also leads to food, this will be a ‘surprise’ and will require the animal to update its model of the world. Jablonka parses this as an animal constantly generating predictions about the value of the stimuli it encounters. With a novel, surprisingly reinforced input, the most efficient strategy is to adjust the predictions to minimize future surprise (ibid p. 322).

Adopting this approach to LAL can help us explain the nature of blocking:

“A model of blocking posits that the effect of the new concurrent stimulus Y is inhibited because there is no difference between the reinforcing effect of X and that of XY. This makes evolutionary sense: mobile animals with LAL encounter many incidental stimuli that co-occur with an already reinforced stimulus but do not contribute to reinforcement, and learning about these stimuli would be a waste of time and energy.” (ibid p. 323)
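This prediction-error account of blocking can be illustrated with the classic Rescorla-Wagner learning rule. To be clear, Ginsburg and Jablonka don’t formalise their account this way in the book; the following is just a minimal sketch, with illustrative parameter values (learning rate, trial counts), of how a shared prediction error makes blocking fall out automatically:

```python
# Rescorla-Wagner: each stimulus carries an associative strength V.
# On every trial, each stimulus that is present is nudged by the
# prediction error: the reward received minus the total prediction.

def train(V, trials, present, reward=1.0, lr=0.3):
    for _ in range(trials):
        prediction = sum(V[s] for s in present)
        error = reward - prediction      # the "surprise"
        for s in present:
            V[s] += lr * error           # no surprise, no learning

# Blocking group: X is pre-trained, then the compound XY is reinforced.
V = {"X": 0.0, "Y": 0.0}
train(V, 50, ["X"])                      # phase 1: X alone predicts food
train(V, 50, ["X", "Y"])                 # phase 2: XY -> food, but X
                                         # already predicts it perfectly
# Control group: Y is instead paired with a novel stimulus Z.
V2 = {"Z": 0.0, "Y": 0.0}
train(V2, 50, ["Z", "Y"])

print(f"blocked V(Y) = {V['Y']:.3f}")    # stays near 0: Y is blocked
print(f"control V(Y) = {V2['Y']:.3f}")   # climbs toward 0.5: Y learned
```

Because X already drives the prediction error to zero by the end of phase 1, the incidental stimulus Y acquires essentially no associative strength, exactly the "waste of time and energy" that Jablonka argues LAL animals are built to avoid.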

Unlimited Associative Learning and Minimal Sentience


Jablonka argues that UAL has four key features:

(1) Hierarchical processing, which enables compound perception and compound action.

(2) Integration between perception and action models.

(3) Memory for compound percepts.

(4) A flexible global reinforcement system. (ibid p. 360)

In essence UAL involves a creature being able to string together non-reflex-eliciting input, such as a specific conjunction of several features (green, stick-shaped, large), or to string together a non-reflex series of actions. For Jablonka, with the arrival of UAL we get creatures with the capacity for minimal consciousness. But a problem she associates with this arrival of consciousness is that, unlike LAL, UAL doesn’t have a built-in capacity to stop overlearning. Hence, she argues, these creatures were not only capable of minimal sentience but also of severe anxiety.

A Few Brief Criticisms

A key problem with the story Jablonka tells is that while her behavioural-capacity story is well justified, her projection of consciousness goes beyond the evidence. Detailed studies over the last fifty years give us strong evidence that, intuition aside, complex behaviour is not a reliable indication of consciousness. Le Doux recounts some of these reasons in his recent book ‘The Deep History of Ourselves’:

(1) Split-brain patients: when the corpus callosum is cut in human subjects, the left hemisphere can no longer communicate with the right hemisphere. Subjects respond to commands that are flashed to the right hemisphere (the non-verbal hemisphere) via the left visual field. If ‘Stand Up’ is flashed to the subject, they will stand up; but when asked why, they will give a confabulated reason. One interpretation of this is that the behaviour of standing up is an unconsciously controlled behaviour.

(2) Blindsight: after a stroke damaging the visual cortex of one hemisphere, patients are blind to things on the opposite side of their visual field. Although such patients claim they cannot see anything on that side, they can accurately grasp objects there and perform better than chance in experiments, indicating that they are perceptually registering objects on that side even though they have no phenomenal consciousness of them. Again, this indicates that some complex behaviours don’t need consciousness. So why assume that creatures of the Cambrian Explosion were conscious?

(3) Patients whose hippocampus was removed, causing global amnesia: such patients’ verbal reports indicate that they are not updating their memories. Nonetheless these patients can still learn complex motor tasks. So despite their conscious memory being destroyed, they can still learn procedural tasks, indicating that their complex motor behaviours need not be conscious (Le Doux, ‘The Deep History of Ourselves’, pp. 266-268).

Overall, what these studies indicate is that complex motor behaviours do not require consciousness. So Jablonka’s assumption that creatures in the Cambrian who had UAL also had minimal consciousness is somewhat of a stretch. Jablonka hasn’t done enough to distinguish unconscious motor behaviour from consciousness-driven behaviour.

At this stage the empirical evidence doesn’t conclusively support or refute Jablonka’s claims. To use a tired cliché: more research is needed. A possible way out of this impasse lies in Le Doux’s claim that a central weakness of neuro-pharmacology is that it is based on animal models that track fear behaviour instead of the feeling of fear. In his book ‘Anxious’ Le Doux suggests a different way of understanding anxiety. If future neuro-pharmacological studies based on Le Doux’s recommendations are more pragmatically successful than current techniques, this would be evidence that his theory of consciousness is superior to Jablonka’s.

Once this data is in, we will have more evidence to help us decide between the competing pictures of the Cambrian Explosion: whether its creatures were perpetually horrified beings like characters from a David Lynch film, or merely beautifully designed unconscious robots competing with one another.

[1] Jablonka and Ginsburg are co-authors of ‘The Evolution of the Sensitive Soul’. Throughout this blog-post, for ease of reading, I will use ‘Jablonka’ as shorthand for Jablonka and Ginsburg. I use Jablonka’s name rather than Ginsburg’s because I am more familiar with her work, but the post is about the book they co-authored.

[2] Obviously experiments are still performed today on chimps, dogs, etc., and sometimes these experiments are done for commercial profit rather than to cure diseases that plague humanity. However, overall, scientists now need to justify their experimental practices on animals to a greater degree than a hundred years ago.

[3] Henceforth Limited Associative Learning will be referred to as LAL.

The Rise of The Care-Bots

In his 2017 blog-post ‘The Carebot Dystopia: An Analysis’, Patrick Danagher criticized the idea that future care provided primarily by robots would necessarily lead to a dystopian state of affairs. Danagher’s blog-post focused on two papers which paint dystopian pictures of a robot-care world: Coeckelbergh’s ‘Artificial Agents, Good Care, and Modernity’, and Sparrow and Sparrow’s ‘In the Hands of Machines? The Future of Aged Care’. Danagher’s argument, as well as the arguments of the papers he critiqued, focused primarily on the care of the elderly, which is an understandable focus. As he correctly noted, people are living longer and longer, and the older we get the more likely we are to develop diminished capacities in a variety of areas. The elderly don’t necessarily need to be cared for; some people in their 90s live independent lives without any care. Nonetheless, as we age we typically come to require care as a result of some form of disability that accompanies aging. Various disabilities in mobility become more likely as we age, and likewise cognitive disorders such as dementia. So given that humans are living longer and longer, finding better techniques to care for the elderly becomes more important.

While Danagher’s focus on the care of the elderly is understandable, it is unnecessarily narrow. Another area where robot help could become a possibility, and which raises the same ethical concerns, is the care of the disabled in general. Care-bots could be used to help people with intellectual disabilities and people who are disabled as a result of neurological disorders, strokes, car accidents, etc. Since the real focus is the issue of using care-bots to help people who are disabled, focusing primarily on the elderly, not all of whom are disabled, seems to be off target.

One of the key arguments made against the use of care-bots is that such care-bots cannot provide loving care for the people whose needs they are supposed to support, and that they are therefore deficient tools (Sparrow and Sparrow p. 17). Sparrow and Sparrow make this claim about the deficiencies of care-bots a central plank of their dystopian thesis:

“In the meantime, robots are clearly not capable of real friendship, love, or concern, only (perhaps) of their simulations. This has a number of important implications for their involvement in caring roles… as we argued above, robots will by and large not be capable of succeeding in aged-care roles where emotional work is an important part of the role” (ibid p. 17)


Sparrow and Sparrow go on to make an even stronger claim: since such robots are incapable of actually caring for those they serve, it is morally reprehensible to use them as tools, because those in care may become deluded into thinking that the care-bot is a friend. Danagher gave a convincing argument that there is little reason (other than meat chauvinism) to assume that a care-bot won’t eventually be capable of really empathically ministering to those who employ it. While Danagher has a point, the fact is that we have no idea whether we will ever be able to create care-bots that actually care. Rather than engaging in speculation about whether future AI research will ever solve the hard problem of consciousness, I think we should be asking a different question: ‘do disabled people actually want their care-bots to care for them?’

Research in disability studies indicates that people may not actually want a care-bot (or a human carer) to act as a friend. As the philosopher Eva Feder Kittay noted in her ‘Learning from My Daughter: The Value and Care of Disabled Minds’, many people with disabilities would prefer to view the people who provide their care as professionals doing their jobs, as opposed to people who are in a loving relationship with them:

“Although family members most often fill the role of the paid attendant, the preferences of many disabled adults, interestingly, do not favor this arrangement. The sense of independence disabled people hope to attain is, according to some, best served by a paid stranger with whom one has a thin relationship of employer to employee instead of the thick relationships of family. In order to sustain a sense of independence (or, as I shall suggest, the illusion of independence), the personal attendant must become invisible. In the words of Lynn May Rivas, the carer needs to turn over the “authorship” of her work to the disabled person” (Kittay 2019, p. 158)


To such people, the fact that a robot helping them doesn’t care for them would not be a drawback; it would be a positive thing. Indeed, one of the benefits some disabled people would see in care-bots is precisely that they are tools that can be used to increase their own independence. People with disabilities, like everyone else, have friends and family; they are not looking for care-bots to fill a caring void, but rather want tools they can use to increase their independence as they interact with their world.

Missing from the normative claims about the use of care-bots is any attempt to ask the disabled and elderly whether they would like to use them. Asking them would need to be done in tandem with ascertaining their degree of understanding of the nature of the care-bots. Instead of asking such questions, we get the opposite course, where what they want is ignored and we are given a normative argument that such bots are intrinsically bad, independent of what the patient wants. Such normative arguments aren’t bad per se, but they do have the drawback of ignoring the descriptive wants of the people in need.

Sparrow and Sparrow do note the importance of ascertaining whether people actually want to be looked after by care-bots. They even acknowledge (in a footnote at the back of their paper) that there is some evidence that some elderly people have positive attitudes towards care-bots. There is still no conclusive evidence, however, positive or negative, about what attitudes the elderly who need assistance, or other disabled people, would have towards care-bots.

Sparrow and Sparrow worry that even if the elderly do have positive attitudes towards care-bots, this may reflect their ignorance of the nature of the care-bots. To avoid this difficulty they suggest a deliberative process in which the elderly can debate the pros and cons of the issue and come to an informed decision before settling on their views. I am sympathetic to such approaches, but there is a difficulty: such a process leaves out a substantial proportion of disabled people, who may be incapable of engaging in it.

A great number of people with disabilities have either no linguistic competence, or linguistic competence so limited that it effectively bars them from engaging in the deliberative process. People who suffer damage to Broca’s area of the brain (as a result of accident or stroke) can sometimes lose their ability to produce natural-language sentences. These people still have preferences and wishes, even if they can no longer express them through language. Similar considerations apply to various developmentally disabled people who, for whatever reason, don’t develop a fully functional language. Despite the fact that they cannot speak, they can think and have preferences about the things they like in their lives.

Now, people like Sparrow and Sparrow would probably agree with me that it is a pity that people without language are barred from the deliberative process when it comes to the care-bot issue. But they could add that there is little to be done: if people are incapable of engaging in public deliberation, they are unfortunately barred from the choice in relation to care-bots.

I would argue that it would be a mistake to exclude non-linguistic people from having a say in who helps provide their care. We regularly discover people’s preferences by looking at their behaviour. A person’s behaviour can reveal a lot about what they prefer doing. In a care setting, people without language regularly indicate their food choices by pointing to a picture of the food they want, pointing to the actual food they want, or handing back food they don’t want. Even with people who do have language, we can discover facts about them by looking at their behaviour. Thus some people claim that they don’t like Facebook because of ethical concerns about what it does with our data; nonetheless, they spend hours every day playing on Facebook. Their behaviour indicates revealed preferences that contradict their expressed verbal pronouncements. In the case of non-linguistic people, their behaviour is all we have to go on when trying to discover what they want.

We do not know whether non-linguistic disabled people would prefer to have their needs met by care-bots. But it is possible to find out, by using some care-bots to help them on a limited basis and seeing whether they find the process helpful. If they choose to interact with the care-bots as much as they do with ordinary carers, this is an indication that they find the care-bots useful. Now, studies to discover revealed preferences in relation to care-bots would be difficult to conduct, and would involve complex ethical considerations. Furthermore, it may turn out that such studies conclusively show that people do not like interacting with care-bots. But what if the non-verbal disabled do like being cared for by care-bots?

Sparrow and Sparrow are committed to the view that care-bots are by nature unethical. They argue that care-bots have the potential to mislead people about their nature and hence are inherently unethical: people will erroneously think that the care-bots are their friends, while the care-bots are nothing more than uncaring mechanical devices. As we saw above, not all people with disabilities want their carers to be their friends, and this mitigates some of the Sparrows’ concerns. It is of course possible that some people will be deceived about the nature of the care-bots. But if the care-bots end up increasing the quality of life of the people they help, and if those people prefer interacting with the care-bots, one could argue that this outweighs concerns about people being potentially confused about the nature of the care-bots. Furthermore, it is of course possible for people to be confused about the nature of their relationship with the humans who care for them; such concerns are not limited to care-bots. So it would be unfair to rule out the use of care-bots because they have the potential to mislead some of the people they are helping.