
What If? Should We Forgive Dr Strange?

With the unlimited money of Disney Plus, Marvel keeps churning out more and more content, uneven in quality. Their latest attempt is an animated series called ‘What If?’, where a narrator, a kind of multiversal celestial being, talks us through a series of possible worlds involving characters we are familiar with from the canon Marvel Cinematic Universe but who live different lives from the ones we know. Each episode depicts a different universe, and all episodes are distinct from the Marvel Universe we take as canon.

The latest ‘What If?’ episode tells a sad and moving story about Dr Strange. The story involves Dr Strange’s girlfriend being killed in a car crash. Strange cannot cope with her death and uses the Eye of Agamotto (an enchanted stone which gives him some control over time) to try to change the past and ensure that she never dies. However, no matter how many times Strange changes the past, his girlfriend keeps on dying. The story unfolds with Strange obsessively trying and failing to stop her from dying. The culmination is that Strange turns to the dark arts to bring her back. While he does briefly succeed in bringing her back to life, she is terrified of the abomination he has become, and she ends up dying anyway. Ultimately, the spell which brought her back to life destroys the entire universe, except for Strange, leaving him on his own, crying and bemoaning his mistakes.

As the episode ends one is left feeling some sympathy for Strange as he weeps and regrets his actions. But while an immediate sense of sympathy for a character is understandable, it leads one to reflect on whether the sympathy is warranted. When Strange first tries to modify time to bring his girlfriend back, he seems to be motivated by love. But slowly his attempts take the form of an obsession, a dreadful need to control reality that is motivated less and less by love as the episode progresses. In one sense, the fact that Strange appears to be motivated less and less by love makes him a less sympathetic character. But on another level his compulsive need to control reality reveals a childlike side of Strange that is both pathetic and tragic. Strange seems to want reality to unfold precisely as he wills it, and one is left feeling some pity for his puerile attempts to play god. But to pity a person is not to justify their behaviour. We can pity a person in the depths of alcoholism who drives under the influence and ends up knocking someone down and killing them[1]. The pity can derive from the fact that the person is a pathetic figure who wasn’t fully in control of their actions. However, even if one did feel pity for the drunk driver, most people would still hold him morally responsible for his actions. To hold a person morally responsible for their actions one needs to argue that they could have done otherwise. The drunk driver could have chosen not to drink when he was driving, or, if he felt compelled to drink, he could have desisted from driving. If the drunk driver couldn’t have done otherwise, then he cannot be held responsible for his actions. A determinist who argues that all our behaviours are determined by causal antecedents would hold the view that ultimately nobody is responsible for their actions.

But adopting a determinist perspective on Dr Strange’s behaviour in his fictional world may be in tension with what we know about the Marvel Multiverse. In the episode we are discussing there is a twist: we find out that the Sorcerer Supreme splits Strange in two (unbeknownst to Strange), so while one version of Strange obsessively tries to bring his girlfriend back to life, the other moves on with his life and is a force for good in his universe. Indeed, the whole premise of ‘What If?’ is that there exist infinitely many possible versions of Dr Strange, all of them living slightly different lives. Furthermore, we know from the Marvel Cinematic Universe that there exists at least one other Strange whose girlfriend died in a crash, and yet this Strange managed to move on with his life without destroying a universe. To some, the fact that there are alternative versions of Strange who do not engage in his abhorrent behaviours indicates that he could have done otherwise.

In the under-described and fictional world of Marvel, which appeals to magic as well as pseudo-scientific explanation in its world-building, there will obviously be no fact of the matter about whether Strange’s behaviour was determined. However, from the point of view of the real world, our world, there is no logical compulsion from possible worlds or a multiverse to the existence of free will. In fact, physicist Sean Carroll argues that one version of the multiverse, supported by a realist interpretation of Quantum Mechanics, is entirely deterministic:

“The right way to think about the causality is “some microscopic processes happened that caused branching, and on different branches you ended up making different decisions,” rather than “you made a decision, which caused the wave function in the universe to branch”…Quantum Mechanics is not necessarily indeterministic. Many-Worlds is a counterexample. You evolve, perfectly deterministically, from a single person now into multiple persons at a future time. No choices come into the matter anywhere.” (Sean Carroll, ‘Something Deeply Hidden’, pp. 214-218)

There is obviously no fact of the matter about whether Strange lives in a deterministic universe like our own. Nonetheless he provides a useful thought experiment about whether we should feel sympathy for people in a deterministic universe. In this author’s view a deterministic universe should elicit universal sympathy, as no agent is truly responsible for their behaviour. But responsible or not, they still have to live with their behaviour.

[1] Some would argue that it would be inappropriate to feel sympathy for the drunk driver, and that all sympathy should be directed towards the innocent victim killed by the driver and the victim’s family. However, it is possible to feel some sympathy for the drunk driver even if most of your sympathy is with the victim and his family.

Forrester’s ‘In the Shadow of Justice’: A Summary and Discussion

Below I will briefly outline the central arguments of Forrester’s book ‘In the Shadow of Justice’. The primary aim of this blogpost will be to outline the central arguments she presents in each chapter. I will present a more thorough criticism of the book in my next blogpost.

Chapter 1 of Forrester’s ‘In the Shadow of Justice’ focuses on the development of Rawls’s Theory of Justice. Forrester portrays Rawls firmly as a man of his time who took on the concerns of his era and integrated them into his political theories. In the aftermath of World War 2, with the rise of communism in Russia, western thinkers were concerned with the dangers of totalitarianism. In Forrester’s view Rawls’s philosophical system was shot through with worries about too much state control drifting towards a totalitarian state. Hence, a key concern of Rawls’s Theory of Justice was to minimise state control except where necessary. Rawls’s two principles, (1) the liberty principle: maximise liberty to the degree that one’s liberty doesn’t impinge on others’ liberty, and (2) the equality principle: differences in income are only permitted if those differences benefit the least advantaged members of society, formed a threadbare structure designed to ensure basic justice while minimising state interference. Forrester’s chapter seems to be written for an intended audience of analytic philosophers who, to their detriment, sometimes focus entirely on abstract philosophical argumentation and downplay the importance of time and context in shaping theories.

            While Forrester does a good job of tracing the roots of Rawls’s Theory of Justice, she doesn’t go into much detail in evaluating the justification for Rawls’s theory. Forrester details the influence of linguistic philosophy (Wittgenstein, Ryle, etc.) on Rawls’s conception of a society being like a game consisting of rules made by people, but she never questions whether Rawls was justified in adopting this approach. She notes that Rawls attempted to constrain his conception of what a just society could look like by grounding his view of human nature in facts discovered in psychology, in particular Piaget’s developmental approach. While Forrester places Rawls firmly as a product of his time in terms of societal trends and philosophical concerns, she fails to do so when it comes to the scientific influences on Rawls’s theory.

            While Rawls obviously cannot be faulted for scientific developments which occurred after he was writing, contemporary scientific research indicates that Piaget’s developmental story underestimated the innate constraints on human development. Any conception of society that models it on an analogy with man-made games will have to face the question of whether the type of games we can play is constrained by our innate psychology. While Forrester does a good job of tracing the degree to which Rawls is a product of his times in most cases, she doesn’t sufficiently detail the degree to which his scientific understanding was time-specific, and how this time-specific scientific world-view affects the validity of Rawls’s theory of justice.

Chapter 2, ‘Obligations’, centres on Rawls’s and his contemporaries’ evolving views on our obligation to the state. The sixties in America were a time of unrest, with protests centred on a variety of different issues: civil rights, the war, nuclear disarmament, etc. Forrester notes that one of the hot-button issues philosophers needed to address when considering our obligations to the state was civil disobedience and when it is justified. Rawls modelled civil disobedience on the concept of fair play. By entering society and receiving the benefits of membership, people acquire obligations to that society. One such obligation is not to engage in practices that are harmful to that society. A key concept for Rawls when thinking through these issues was the importance of the stability of our society. Rawls argued that if most people had agreed on the principles and constitution of society, then this limited the scope of legitimate protest. Rawls went as far as to argue that even if a person could show on utilitarian grounds that civil disobedience would do more good than harm, this would not justify the disobedience. He argued that if people are part of a fair practice and receiving benefits from it, their duty of fair play binds them to abide by the laws even if they personally find them unjust. Forrester notes that Rawls did offer some scope for civil disobedience, noting that if a society is deliberately disadvantaging a particular group of people, then they are no longer bound by the duty of fair play.

            Forrester does a good job of showing how philosophers writing at the time disagreed with Rawls’s views on civil disobedience on the grounds that they set a very high bar for when civil disobedience is justified. And she does a good job of situating Rawls’s philosophical views in their historical context, which is something that isn’t always done when discussing philosophers’ views.

             The primary question discussed in this chapter is: when is civil dissent justified? It is a timely question and one that is obviously pertinent today. The protests in the wake of the murder of George Floyd, which were screened all over the world, provoked a visceral reaction in most people, and responses to them divided people in America and across the world. Rawls’s general principles seem very disembodied and abstract, but they do offer proposals for when civil dissent is justified. However, as Forrester correctly notes, the bar set by Rawls for legitimate civil dissent is extremely high and does rule out a lot of dissent that citizens would argue is justified.

Chapter 3, ‘War and Responsibility’, focuses on Rawls’s and his fellow philosophers’ reactions to the Vietnam War. The war raised several philosophical issues. In its wake some philosophers wanted to tread a line between the extremes of utilitarian justifications of any war crime on pragmatic grounds, and pacifism, which condemned any act of war no matter how justified. The backdrop to these philosophical discussions was war crimes in Vietnam such as the My Lai massacre, which involved the mass murder and rape of hundreds of innocent Vietnamese citizens. While the My Lai massacre was an obvious case of a war crime, philosophers wanted to find a set of moral principles which could deal with more ambiguous acts of war. One attempted principle was the doctrine of double effect, which, roughly put, states that if a person does something which is morally good but which has an unintended side effect which is bad, we can judge the person’s act as ethically permissible. Forrester describes Anscombe’s explication of the doctrine of double effect and outlines the responses of philosophers such as Philippa Foot and Rawls to the argument. She notes that Rawls found the doctrine somewhat compelling as an alternative to utilitarianism; it gave him a way of avoiding the supposed extremes of pacifism and utilitarianism. Nonetheless, despite being impressed with the arguments for the double effect doctrine, Rawls claimed that the best way to explicate the limits of a just war was to argue from the Original Position. Thus, Rawls noted that no rational agent would opt for a genocide option when reasoning under the veil of ignorance.

            The central focus of the chapter is on how philosophers tried to rise to the challenge of the Vietnam War and the horrific acts that occurred during it. Forrester notes that philosophers tried to deal with the ethical challenges raised by the war using a series of thought experiments designed to help us think more clearly about the issues. She does an excellent job of explicating the various positions philosophers took. However, the central subject of her book, John Rawls, plays less of a role in this chapter than he does in other areas of the book. The chapter could have benefited from her going into more detail on Rawls’s evolving views on the ethical challenges posed by war.

Chapter 5, ‘Going Global’, details the various attempts by philosophers influenced by Rawls to deal with problems of global justice. Forrester details the responses of three Rawlsian philosophers (Brian Barry, Charles Beitz and Thomas Pogge) to these problems. Forrester outlines Singer’s 1972 paper ‘Famine, Affluence and Morality’, which was a utilitarian attempt to sketch our global duties to each other. Rawls had been very critical of utilitarian philosophy in general, and his Theory of Justice was very critical of utilitarian conceptions of justice. With criticisms of Rawls’s Theory of Justice abounding and Singer presenting a utilitarian account of international justice, Rawlsian philosophers set out to explicate how Rawls’s theory could be developed to cope with problems of international justice.

            Rawls had been sceptical about whether his Original Position could be scaled up to a global level. He thought that it was psychologically implausible that people across the international community could realistically comply with the principles. He argued that in order to realistically expect people to comply with any principles agreed to in the original position, they would need to be from a shared community. Charles Beitz argued that Rawls was incorrect in this assumption, and that if Rawls really accepted it, then he would find it hard to explain how his Original Position would work for people who lived in large states. Beitz argued that once in the Original Position in an international arrangement, people would agree that redistribution from rich to poor states was necessary.

            Beitz’s Rawlsian position was criticised from a number of directions. Some philosophers noted that Beitz’s position, because it abstracted from history, was unable to handle the fact that some nations are poor because of their exploitation by other nations in the past. Others, such as Brian Barry, noted that Beitz’s theory had the conspicuous failing that he never told a plausible political story of how we would convince richer nations to redistribute their wealth.

            Forrester tells a story of philosophers beginning to deal with international justice in concrete terms before becoming more and more abstract and concerned with internal consistency, until the world fell from the view of Rawlsian philosophers. I think she makes her case very well and goes into good detail in exploring the various philosophical attempts to make Rawls’s theory of justice cross international boundaries. However, since her ultimate focus was on how Rawlsian philosophers became so abstract in their theorising that they lost sight of the world, I think her case would have been helped by contrasting the Rawlsians with utilitarian philosophers, who were becoming more and more concrete in their thinking at the same time the Rawlsians were losing sight of the world.

Chapter 4, ‘The New Egalitarians’, tells the story of the publication of Rawls’s Theory of Justice and its reception by his critics. Several of Rawls’s critics on the left argued that his conception of man in the Original Position was unrealistic and influenced by the concept of homo economicus. They further argued that his distributive principle offered little practical advice on how to implement it. They noted that Rawls’s theory relies on the rich being benefactors of the most disadvantaged but does not tell us how to convince rich people to part with a proportion of their wealth and redistribute it to the poor. Having sketched a number of objections to Rawls from the left, Forrester goes on to outline criticisms from the right in the form of Robert Nozick’s famous criticism of Rawls in his Anarchy, State and Utopia. Nozick argued that Rawls’s principles of redistribution were unjust because they ignored the historical conditions in which wealth was acquired. For Nozick, people have rights independent of any social contract, and the historical process of acquiring one’s wealth and property gives us rights that should not be overridden by any social contract theory. Forrester argues that the challenge set by Nozick’s criticisms led to a series of philosophers self-consciously identifying as Rawlsians. Forrester claims that there are three principles that self-described Rawlsians seemed to share: (1) the importance of the basic structure, (2) an egalitarian commitment, and (3) a lack of sensitivity to historical arguments (ibid p. 130).

            The Rawlsian move of abstracting away from historical contingencies didn’t just draw ire from people on the right like Nozick; for many on the left Rawls’s ignoring of history was very problematic. In the era of civil-rights disputes, activists noted that black people had been systematically disadvantaged throughout the ages, through slavery and institutional racism. A proposed resolution for this unjust state of affairs was the notion that reparations should be paid to those whose lives have been systemically made worse because of these historical facts. Forrester notes that because of the conceptual features of the Original Position as Rawls sketches it, he hasn’t got the resources to deal with things like reparations, and his redistribution position is blind to such considerations. Overall, Forrester does a good job of outlining the reaction to Rawls’s book by the left and the right. And she makes an interesting case that it was the reaction to Nozick’s criticisms that led a lot of philosophers to self-consciously identify as Rawlsian philosophers.

Chapter 6 of Forrester’s ‘In the Shadow of Justice’ focuses on philosophers who were influenced by Rawls (either positively or negatively) and their attempts to understand what our ethical obligations to future generations are. After the publication of Ehrlich’s 1968 bestseller ‘The Population Bomb’, the issue of what our duties are to future generations became prominent for philosophers. Rawls’s 1971 ‘A Theory of Justice’ didn’t appear to have the conceptual resources to deal with the issues raised by considerations of our obligations to the future.

            A key question about the original position was who was in it. Rawls had argued that subjects in the original position represented people from all areas of society. However, he argued it would be psychologically impossible to include parties from all generations in the original position. But Rawls didn’t want people in the original position to think only of the present moment and entirely discount the future. For this reason, he inserted two stipulations into his Original Position: (1) people behind the veil of ignorance were deprived of knowledge of what generation they were from, and (2) people behind the veil were part of a family line stretching into the past and the future. Rawls believed that having people as members of family lines would make them sympathetic to people a couple of generations down the line, while not knowing what generation they were from would temper them from being too parochial in their thinking.

            Forrester details several critics’ reactions to Rawls’s attempt to deal with the problem of the future. One major criticism sprang from Hardin’s conception of the future as one which will inevitably lead to population crises, famine, etc. Rawls’s famous circumstances of justice made it a criterion of the principles of justice that there is ‘moderate scarcity’ in resources. Having moderate scarcity as a criterion meant that Rawls’s principles of justice would not be binding in any future like the lifeboat scenario Hardin described, where severe scarcity ruled. Forrester goes on to note that philosophers influenced by Rawls tried to handle difficulties raised by his principles of justice by focusing on different aspects of his theory to save it from criticism.

            Forrester does a good job of surveying the different reactions to Rawls’s theory of justice in relation to problems of duties to future generations. She also manages to clearly demonstrate how, by emphasising different aspects of Rawls’s theory, different philosophers were capable of shaping it for different purposes.

Chapter 7, ‘New Right and Left’, focuses on philosophical reactions to the rise of the right in both Britain and America. With Thatcher as prime minister of Britain and Reagan as president of America, the philosophy of free-market capitalism was very much in vogue. This free-market capitalism, with its emphasis on the privatisation of public goods and the confident belief that the market could magically fix all ills, was a key belief of a substantial number of people in power in the late seventies and throughout the eighties.

            Forrester discusses how philosophers on the left and right reacted to the above political developments. Philosophers like Dworkin reacted to the rise of the right by developing a social democratic theory decoupled from its association with the labour movement. Dworkin was cheerleading the movement of the Labour Party away from its leftist origins towards a more centrist position. With his thought experiment about people stranded on an island, Dworkin argued for a basic structure where people would agree on trade solutions and a kind of social insurance chosen from behind a veil of ignorance. When behind this veil of ignorance, people do not know whether they have a disability or a marketable talent. Each person can buy insurance against the risks they take. Once out from behind the veil of ignorance they either win or lose depending on the level of insurance they have and the marketability of their talents. But at this stage any risks they take, and whether they win or lose, is up to them. Forrester notes that Dworkin’s notions of responsibility, choice and markets were ideas shared by people on the right.

            She goes on to argue that philosophers on the left began to incorporate a lot of Dworkin’s ideas of responsibility, choice and markets, and these ideas became a kind of framework for centrist thought. She ends the chapter with a discussion of how Rawls tamed challenges to his views from Marxist perspectives and how analytical Marxists were domesticated into quasi-Rawlsians. Ultimately, she argues that leftist politics morphed into a Rawlsian redistributive paradigm working within market forces and not really challenging them.

            Forrester does an excellent job of telling how philosophers reacted to the political developments of the late seventies and eighties. It would have been interesting to tell the story from the opposite end and discuss how market forces shaped the type of stories these philosophers felt comfortable telling.

Chapter 8 of Forrester’s book explores a slew of philosophers who were very critical of Rawls. She uses these philosophers’ criticisms to tell a story of how Rawls’s brand of liberalism fell out of favour. Forrester notes that (1) Rawls is regarded as one of the best and most influential philosophers of the past century, (2) he is a paradigm exemplar of liberal philosophy, and (3) liberal philosophy is not well respected in contemporary political discourse by a substantial proportion of the population. A key aspect of chapter 8 is to explicate Rawls’s liberalism as a product of its time and to explain how liberalism fell out of favour.

Forrester draws on the work of a dozen critics of Rawls’s liberal philosophy in chapter 8. Philosophers as diverse as Stanley Cavell, Alasdair MacIntyre, Charles Taylor and Richard Rorty attacked Rawls’s liberal philosophy as relying on an anaemic, atomised conception of the self that ignored the social world of engaged, active agents. As Rawls’s philosophy developed over the years, he came to stress its Kantian aspects. To his critics, Rawls’s view of the self, with its emphasis on rules and rational agents, downplayed the importance of the lived world. Forrester explicates Bernard Williams’s attack on Rawls’s rule-based conception of justice as being psychologically implausible. She also does a good job of detailing other philosophers, such as Cavell and Rorty, who use evidence from art and psychoanalysis to argue that Rawls’s conception of the self is implausible, and that this fact undermines the whole theory of justice built on it.

            Forrester notes that there is a great irony in the criticisms levelled at Rawls, in that he at times held views similar to those of his critics such as Williams and Cavell. In his 1963 ‘The Sense of Justice’ Rawls sketches a moral psychology which he argues is consistent with the choices that would be made in the original position. While Rawls’s conception of the developing self is told in abstract academic tones, he does take on board considerations from theorists such as Freud and Piaget. In short, Rawls sketches three kinds of guilt: (1) authority guilt, (2) association guilt, and (3) principle guilt (The Sense of Justice, p. 100). Rawls’s conception of the child developing his moral principles based on feelings of guilt tells a story of the self that is not quite as abstract and disembodied as critics like Cavell and Williams suggest.

             A major difficulty with her treatment is that because she discusses so many theorists, she doesn’t give herself enough space to delve into the argumentative structure of the work of Rawls’s major critics. Aside from this quick treatment, I would argue that the narrative she was trying to tell, of critics of Rawls leading to a defanged liberal philosophy, blinded her to liberal critics of Rawls whose views are eminently practical, such as Nussbaum and Singer.

Consciousness from Descartes to Ayer by David Berman

David Berman has a new book out on Consciousness. He describes it as follows:

“My book, Consciousness from Descartes to Ayer contains my fullest and clearest account of what I take to be the dualist and monist types. 

The book consists of four chapters and an Epilogue.  Its main thesis, put forward in chapter one, is that all humans have one of two basic forms of consciousness, one that is monistic, the other is dualistic. Yet it is universally believed that all humans have one basic and natural form of consciousness. This is what I call the Assumption, which I take to be profoundly untrue. Yet I do accept that all humans have a mode of consciousness that has arisen through nurture and is a mixture of the monistic and dualistic in the way that green is a mixture of yellow and blue.  The greenish mode of consciousness is what I call socio-linguistic, because it is constituted by language and required for living in society. And it is this socio-linguistic mode of consciousness that has largely occluded the monistic and dualistic. 

Chapter one is foundational, and the following three chapters build on its foundation.  Chapter two, entitled ‘Irish Philosophy: Past and Future’, begins with an account of the one period when Ireland was at the cutting edge of world philosophy. This was the golden age, which was born with John Toland’s Christianity not Mysterious (1696), grew with the answers to his challenge, culminating in the work of Francis Hutcheson, Edmund Burke and especially George Berkeley, and then came to a close in the late 1750s. One thesis of this chapter is that Ireland might again be at the cutting edge if it follows the lead of its two greatest thinkers, Toland, the great monist, and Berkeley, the great dualist.  But the more important aim of this chapter is to display the two basic types as exemplified by Berkeley and Toland. 

Chapter three is on David Hume, another great monist, and his use of what I call the Art of theological lying, as found most clearly in his celebrated account of miracles. However, my more important and controversial thesis, which is the main focus of this chapter, is that Hume also uses the Art in his alleged retraction of his monistic theory of personal identity, according to which theory a person is only a bundle of perceptions with no perceiver of these perceptions.  In short, what I try to show is that Hume’s alleged retraction of his monistic theory in the Appendix to his Treatise was not sincere but Artful and strategic lying. And this is both explained and supported by my account of monism and dualism in chapter one. However, if my main claim in chapter three is accepted- which I think it should be, since powerfully supported by the available evidence- then that would show the power of typal theory developed in chapter one.  

Chapter four is chiefly on A.J. Ayer, who, like Hume, was a monist. But while Ayer followed Hume in many ways, what I try to show is that he can be considered a clearer and more complete monist than Hume.  Another aim of this chapter is to show how much tangle and confusion there is in 20th century philosophy, but how the tangle can be untangled once we see the importance of the dualist and monist dichotomy as described in chapter one. 

In the conclusion of this work, the Epilogue, I address what, it is generally agreed, distinguishes human beings from other animals, namely that we are the most social animal, which is shown in our ability to live and operate in huge numbers.  But whereas most biologists and anthropologists trace this to our genetic makeup, I trace it to a cultural development which occurred circa 30,000 to 70,000 years ago, when Homo sapiens learned to speak, that is, understand and teach symbolic language.  This is the kind of language which no other animal has acquired; for their use of language is limited to signs, such as that of danger, which is largely instinctive and genetic. Whereas symbolic language, which we human beings alone have, is and was a cultural acquisition that must be taught and learned if it is not to be lost.  And this cultural acquisition I argue, drawing on the 2010 breakthrough in evolutionary genetics and my account of basic types in chapters 1-4, arose from the successful mating of Homo sapiens with the Neanderthal 30,000 to 70,000 years ago and, more specifically, the (temporary) emergence of hybrids.”

Verbal Behaviour and Free Speech Absolutism

                                           Free Speech Absolutism

In political debates across the internet the issue of free speech has taken on a central role. It is primarily (though not exclusively) people who identify with the centre or right of the political spectrum who are concerned with this issue. The worry they voice is that liberal types in big tech, other corporate organisations, and university administrations are sacrificing free speech to appease the PC or so-called ‘Woke’ brigade. To illustrate this worry they point to what they think of as paradigm examples of people’s speech being stifled, such as professors being told they will lose their jobs if they don’t refer to people by their preferred pronouns, or people being banned from Twitter or YouTube for purported hate speech. The free-speech absolutists take it as a mantra that free speech is a fundamental right that should not be overridden no matter the cost. Podcaster Joe Rogan, who is a member of a group called the IDW which is obsessed with the notion of free speech, has discussed this issue on many occasions. Rogan holds the view that the solution to hate speech isn’t banning people from public platforms; rather, the solution to hate speech is more speech countering it.

Not all theorists who worry about free speech hold the extreme view that any speech should be allowed no matter the consequences. Thus, a substantial proportion of the IDW supported Jordan Peterson when he threatened to sue people who wrote articles critical of his work. Their view was that this wasn’t a free speech issue but rather an issue of defamation of character; hence it was a legal issue. I won’t get into whether Peterson’s use of lawyers was justified from a legal perspective; I will just note that those who support Peterson on this issue do agree that there should be some legal constraints on free speech. Such people presumably wouldn’t support a Rogan-type figure who said that the solution to defamation is simply to state why you disagree with what the person said, as opposed to making defamation illegal.

The notion of speech being constrained for legal reasons should not seem overly objectionable to most people. Speech is a form of behaviour, and like all other behaviour, it is constrained in various ways for various reasons. Driving a car is a form of behaviour, and it is severely constrained for a variety of different reasons. When driving you are only allowed to drive on one side of the road, you cannot drive through a red traffic light, and you can only drive at a certain speed. Walking is a form of behaviour. I cannot just walk anywhere I want. If there is a sign outside a house that says ‘Private Property: Trespassers will be prosecuted’, then I cannot just walk into the house. My walking and driving behaviours are subject to many legal constraints. Likewise, my verbal behaviour is subject to many constraints, legal and otherwise. A clichéd example is shouting ‘Fire’ in a busy theatre when there is demonstrably no fire, thereby causing a riot. This is a form of verbal behaviour that will get one banned from the theatre. As we discussed above, defamation of character is a form of verbal behaviour that will lead to you getting sued if it is proven you engaged in it. Part of living in a complex society is that a lot of your behaviour (whether verbal or not) is constrained for a variety of different reasons.

Given these obvious facts one wonders why the debate is framed as it is. Why would one side call themselves free speech absolutists? Why would the likes of Rogan argue that the proper response to hate speech is better speech? Is the proper response to people driving on the wrong side of the road to set an example by driving on the right side of the road? All behaviour has consequences and hence there are rules in place that constrain some of our behaviours. So why then do people appear to be more worried about free speech in the case of verbal behaviour than they are with other kinds of behaviour? I think that the reason people have such worries is because of the spectre of Orwell’s 1984.

In Orwell’s 1984 we are told of a totalitarian world where the government has strict rules about how people think, and the aim of the government is to make thoughts critical of the regime unthinkable. This nightmarish world presents a terrifying picture of reality. Few people agree with Orwell that language shapes thought to such an extent that governments banning words like ‘Freedom’ from public discourse will stop people from being able to think about freedom. Nonetheless, as power gets centralized into massive corporations which hold more power than most world governments, one can understand why people may be fearful of such corporations using their power to control the topics we are allowed to speak about. If Twitter or YouTube ban certain topics from even being discussed, some people feel that we are veering towards a state of affairs where a handful of tech billionaires get to decide what people are allowed to say about certain political topics.

The Orwellian worries go even deeper than worries about whether tech companies are banning people for certain speech acts. The worry is also that these tech billionaires are trying to implement a type of Orwellian thought control, where you get people to accept certain patently false statements. In 1984 the Party forced Winston to believe that 2 + 2 = 5 for no other reason than to establish dominance over him and divorce him from any reality other than the reality of Big Brother. Some free speech absolutists draw a line in the sand: they hold certain truths to be immutable facts, or groundworks that cannot coherently be questioned. One such line in the sand free speech absolutists draw is on the issue of biological sex. These Orwellian worries take the form of arguing that tech billionaires, university administrations, and corporations of all sorts are trying to force people to deny obvious biological facts. Why are these corporations supposedly trying to control our thoughts? The answer depends on which conspiracy theorist you talk to. One set of conspiracy theorists will argue that the corporations are being forced to do this by ‘woke’ people who are unknowingly in the grip of Neo-Marxist ideology. To another set of conspiracy theorists, the aim is to get people to lose their grip on any objective reality, giving those in power the ability to control what you can speak of and destroying any answerability to the world. In this nightmare world the only thing you are answerable to is those in power.

So-called free speech absolutists range from people who think that some of our verbal behaviour may be subject to restrictions for various reasons, in the same way that all other behaviour is subject to constraints, to conspiracy theorists who think that attacks on our free speech are part of some Orwellian plot to control us and divorce us from any concept of reality. In addressing these views, it is important to consider how free our speech is in general, independently of the contexts free speech absolutists typically care about. In ordinary conversations, does the notion of free speech play much of a role?

                           How Free is Free Speech?

“One fundamental contribution of what we have been calling “Cartesian Linguistics” is the observation that human language, in its normal use, is free from the control of independently identifiable external stimuli or internal states and it is not restricted to any practical communicative function…The limitless possibilities of thought and imagination are reflected in the creative aspect of language use. Language provides finite means and infinite possibilities of expressions constrained only by rules of concept formation and sentence formation” (Chomsky ‘Cartesian Linguistics’ p. 77)

Linguist Noam Chomsky has noted that language has some curious properties, and these properties should be acknowledged in any discussion of free speech. A key feature of language is linguistic productivity, which relies on a basic set of concepts and a grammar with a recursive device (the ability to embed concepts within concepts); by using these tools humans have the capacity to construct a potentially infinite number of sentences. It is not just that we can construct a potentially infinite number of grammatical sentences; we also manage to construct a potentially infinite number of sentences that are coherent. Chomsky gives a famous grammatical sentence, ‘Colourless green ideas sleep furiously’: while the sentence is grammatically well formed it has no meaning, and the idea of green ideas sleeping furiously does not seem coherent. Most sentences humans speak are coherent; grammatical but incoherent sentences are very unusual in a normal linguistic community. Aside from combinatorial productivity and coherence, sentences also typically have appropriateness to the situation. Suppose two people were sitting at a table and one of them proposed to the other. There are many different responses that could be offered, such as accepting the proposal or saying ‘No, I am not ready for marriage’. But if a person responded to the proposal by saying ‘No black scorpions are falling on this table’, most people would judge this a very inappropriate response. Normal human sentences are typically appropriate to the situation in which they are spoken.
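The productivity point can be made concrete with a toy sketch. The following is my own hypothetical mini-grammar, not anything from Chomsky's formalism: a single rule that lets a sentence embed another sentence already yields unboundedly many distinct sentences from a finite lexicon.

```python
# Toy recursive grammar: S -> "the child sleeps" | "the linguist thinks that " + S.
# Because S can embed another S, a finite vocabulary generates sentences of
# unbounded length -- a minimal illustration of linguistic productivity,
# not a model of real human syntax.

def sentence(depth: int) -> str:
    """Build a sentence with `depth` levels of embedding."""
    if depth == 0:
        return "the child sleeps"
    return "the linguist thinks that " + sentence(depth - 1)

for d in range(3):
    print(sentence(d))
# Each extra level of recursion yields a new, still-coherent sentence.
```

Running the loop prints "the child sleeps", then "the linguist thinks that the child sleeps", and so on; nothing stops the depth from growing, which is the sense in which the means are finite but the possibilities infinite.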

            Linguistic productivity, coherence, and appropriateness to the situation are indeed key features of language. However, Chomsky goes even further than pointing out these three truisms. He argues that these capacities are uncaused by environmental factors, social reinforcement, or internal factors (Cartesian Linguistics p. 77). He goes on to argue that it is a mystery how we manage to use language freely and appropriately.

            Chomsky’s conception of language as an instrument of thought that we can freely use is congenial to so-called free speech absolutists. It is also congenial to the individualistic way Western people have been enculturated. While there is much in Chomsky’s account that is persuasive, his emphasis on behaviour being uncaused is very strange. In his review of B.F. Skinner’s Verbal Behaviour, Chomsky lambasted Skinner’s use of the notion of stimulus control. Chomsky gave the example of a person viewing a picture in a museum, and noted the absurdity of thinking that every time the person saw the picture they said ‘picture’ or ‘art’ etc. Chomsky was of course correct to note that words aren’t elicited in a crude stimulus-response manner. Nonetheless he was incorrect in his belief that Skinner was proposing a stimulus-response account of word learning, and he ignored the fact that Skinner emphasized multiple sources of control of our use of words. I won’t today go into the long complicated debate between Skinner and Chomsky; I will just note that the two make conflicting assumptions about the role the environment plays in the causation of verbal behaviour, and neither really manages to justify his a priori intuitions[1]. Skinner assumes that our language is under constant environmental control (and control from the environment our ancestors evolved in, through genetic constraints), but never demonstrated this. Chomsky argues that our ability to coherently and appropriately use language is a mystery which will probably always remain beyond our ken, and he even speculates that this ability may not be causally explicable through either internal or external factors.

            We won’t solve the debate between Chomsky and Skinner on this topic, but I think it is worth thinking through the notion of verbal behaviour and the ways it may be constrained by environmental and psychological contingencies. Thinking through these different constraints may help us understand whether the notion of free speech absolutism makes sense.

            In ordinary everyday conversation we implicitly police the type of speech we use. People do not speak about the same things, or in the same way, when conversing with a very young child as they would when talking to another adult. A plausible reason why we speak this way to young children is that they prefer that type of speech and so give us positive feedback when we engage in it (Fernald 1985). Furthermore, if we spoke to an adult using motherese they would more than likely find it irritating and condescending, and a normal human would pick up on cues from the adult that they don’t find the speech tone appropriate. To some degree the way we talk to a person (whether an adult or a child) is constrained by environmental feedback, such as how they react to the speech.

            Even in simple conversational phenomena such as slips of the tongue we can sometimes see a type of implicit policing of our verbal behaviour. A lot of Freud’s theories have been discredited and his reputation has probably never been lower. However, his two books ‘The Psychopathology of Everyday Life’ and ‘Introductory Lectures on Psychoanalysis’ offer some plausible speculations as to why we make slips of the tongue. These speculations aren’t scientific, but they do offer a window into the types of factors that sometimes influence our slips of the tongue.

            Freud tells a story about staying with a family friend at a summer resort with his children. The host was anti-Semitic and ranted a series of horrible things about Jewish people. Freud, who was of course Jewish, wanted to reprimand the woman and correct her false claims about Jewish people. However, he didn’t want to start an argument and have to find another place for himself and his children to stay. Freud’s discretion won out and he decided not to confront his host. But he decided to get his children out of the way of the host’s anti-Semitism by asking them to go to the garden. He turned to the children and said:

“Go into the garden, Juden [Jews]”, quickly correcting it to “Jungen [youngsters]” (The Psychopathology of Everyday Life p. 134)

A plausible interpretation of the events is that while one part of Freud wanted to admit to being Jewish, this part lost out to other pragmatic concerns. However, the part of him that wanted to admit to being Jewish still managed to insert it into the conversation via a slip of the tongue. One doesn’t have to buy into Freudian psychoanalysis to see that our everyday conversations are shot through with pragmatic (largely unconscious) policing of our own verbal behaviour; we don’t simply act like free speech absolutists.

            In his Verbal Behaviour Skinner makes a similar point:

       “One of the two possible responses is differentially selected because the other is also to some extent punishing when the speaker uses a “euphemistic” expression. The euphemistic response has fewer aversive effects upon the speaker, either directly or indirectly through the listener.” (Verbal Behaviour pp. 235-236)

In Skinner’s example we use euphemisms because we have been trained that outright insults typically have punishing consequences. Again we can see self-policing of verbal behaviour as people try to pragmatically make their way through life.

            This self-policing also occurs in jokes, and again Skinner has an interesting quasi-scientific take on them. I don’t entirely agree with Skinner’s take, as I think incongruity-resolution theory is a more comprehensive theory of jokes. However, I think Skinner’s take is accurate for some cases of humour. Skinner makes the following point:

“Freud has emphasized the fact that witty responses are (a) automatically reinforcing to the speaker, (b) punishable by the listener or community. Humour is preoccupied with tabooed subjects, in particular sex, and with having aversive effects upon the listeners or others. Freud argued that wit permits the “release” of repressed responses, but the point can be made by saying that the response receiving supplemental support is weak because of punishment. Both interpretations miss an important point…The aggressive nature of the remark no doubt accounted for much of its strength; the function of the wit was to make an aggressive response unpunishable. But it is not enough to say that the speaker could appeal to the “harmless meaning” in a legalistic extenuation (I was only adding a rather thoughtless conversational remark) because the “aggressive meaning” (you may not be on the throne for long) was clear to everyone. Rather we have to appeal to a particular characteristic of the witty verbal community. Just as the literary community tolerates weak determiners of strength, so the witty community exacts a quid pro quo for otherwise offensive behaviour. It is almost as if the community had agreed: you may be aggressive provided you are amusing.” (ibid pp. 287-288)

Skinner’s account of humour is pretty basic, but it is on the money in a lot of cases. We have all probably used humour to lampoon people knowing full well that if we said the same thing in a serious way we would be punished. It seems reasonably plausible that humour is another type of verbal behaviour in which we engage in self-policing for a variety of pragmatic reasons.

            I don’t want to belabour the point, but Chomskian worries aside, most people will admit it as obvious that our verbal behaviour in ordinary life is caught in a causal sequence where some verbal behaviour is deemed appropriate and some inappropriate, and this dictates how we typically speak.

                                                    Concluding Thoughts

The purpose of this essay was merely to point out that our verbal behaviour is subject to societal constraints in the same way as all our other behaviour. Some of these constraints are self-imposed for the purposes of pragmatic interaction with each other, and some are legal, for the purpose of running an effective and moral society containing a complex network of agents. In pointing out that the idea of free speech absolutism makes little sense in terms of how language is typically used, I am not giving censorship a free pass.

            Above I noted that verbal behaviour is a form of behaviour like any other. We have laws for how to drive in traffic which help us have a functioning travel system that is safe for citizens. Accepting constraints on driving behaviour doesn’t commit me to accepting absurd constraints on all behaviour; I wouldn’t agree with a constraint that involves imprisoning people without trial for criticizing the government. I look at verbal behaviour the same way. If a person posts details of how to make a deadly poison on Twitter and encourages their followers to make the poison and use it on certain people, then I think that person should be banned. But while I disagree with a lot of what Jordan Peterson says, I think trying to ban his books or ban him from Twitter is absurd. I purposely chose Skinner, Freud and Chomsky to make some of my points: all three of them have at various points in their careers had people try to ban their books as dangerous abominations. I hope it goes without saying that I am not comparing Peterson with Skinner or Chomsky in terms of intellectual achievements. My only point is that with someone like Peterson the proper course of action is to criticise his works on intellectual grounds (and hope he doesn’t sue you 😉 ); banning his books would be absurd.

            There are plenty of intermediary cases which are much more complex than people proposing mass murder on the one hand and books with scientific theses you don’t agree with on the other. I don’t have the answer to all of these cases. I would ask people to think of them in terms of embodied agents negotiating their way in the world and trying to minimize pain in as pragmatic a manner as possible. Thinking of free speech as some God-given right of a disembodied Cartesian intelligence risks losing touch with the world and the difficulties of negotiating it pragmatically.

[1] Note that the issue of causal accounts of verbal behaviour is not a debate about the existence of the language faculty, nor is it a debate that rests on accepting Skinner’s behaviourism. Pinker argues for a language instinct and is a critic of behaviourism, but would in no way accept the assumption that our linguistic behaviour is uncaused. I focused on Skinner and Chomsky above because of their very clear and contradictory intuitions.

Phenomenology, ABA and Challenging Behaviour

A key aspect of contemporary naturalistic analytic philosophy is its emphasis on the fact that we cannot go directly to reality. Our grasp of reality is conceptually mediated; we do not have direct access to the thing-in-itself. Our understanding of reality is of necessity mediated by our conceptual models. Naturalistic philosophers who argue against the Myth of the Given (the idea that we have unmediated access to sensory data or the physical world) don’t all deny that we can refer to objects that exist outside of our conceptual scheme. Quine, for example, placed heavy weight on observation sentences, sentences which hook onto aspects of the mind-independent world. Yet even Quine’s biggest fans would acknowledge that he never managed to fully clarify the notion of an observation sentence and its relation to other sentences in our conceptual scheme. Philosophers of a more cognitive bent have tried to fix reference via internal cognitive structures, though the role conceptualization plays in fixing reference is still debated (see Fodor and Pylyshyn’s 2014 criticism of Tyler Burge). Despite some impressive evidence from neuroscience, our best scientific theories still support the notion that our purchase on the world is gained in a theory-laden way, and we have no reasonable way of ripping our minds off and seeing the world as it is.

            Given this, I think it is worth reflecting on how the notion of observation is used in other schools of thought. In popular lore, phenomenology studies reality as it reveals itself to us in experience. On this caricatured view of phenomenology, we have experiences which are unmediated by conceptual schemes, and it is the job of the philosopher to study these experiences prior to conceptualizing them. At the other end of the spectrum, we are told observation plays a key role in behavioural science. Common lore has it that behavioural scientists refuse to acknowledge intermediate cognitive structures because they cannot be directly observed; direct observation is the paradigm for the behavioural scientist, or so popular science tells us. Based on these two popular stories one could conclude that behavioural science and phenomenology, while approaching reality from different perspectives, agree on the same misguided premise: that we can directly observe reality. A critic who accepted the above caricatures could argue that both traditions haven’t kept up with developments in neuroscience, cognitive science and analytic philosophy.

            In this blog post I will argue that the concept of observation used by both behavioural scientists and phenomenologists is much more sophisticated than the usual caricatures spouted by their critics. After explaining how both disciplines use the concept of observation, I will contrast them, arguing that the behavioural notion of observation and measurement is incomplete and can be improved on by using insights from phenomenology.

            In tackling this project there is a danger of applying labels with too broad a brush. Phenomenologists from Husserl and Heidegger to Merleau-Ponty and Derrida all start from radically different presuppositions. Likewise, in the field of behavioural science, people like Watson, Skinner, Tolman, and Hayes all held widely divergent philosophical assumptions about the nature of the field. To avoid this, I will limit my approach to phenomenology as represented by Merleau-Ponty and behavioural science as practiced by contemporary Applied Behavioural Analysis. By focusing on these particular approaches and how they are used in the field, I hope to give a more grounded conception of how each uses the notion of observation. In the final section I will draw some connections between behavioural science, phenomenology and their different ways of dealing with observation.

                          Behavioural Science and the Concept of Observation.

“Not to define precisely and to measure these behavioural excesses and deficiencies, then, is a fundamental error, it is akin to the malpractice of a nurse who decides not to measure vital signs (heart rate, respiration rate, temperature, and blood pressure), perhaps arguing that she is too busy, that subjective estimates of vital signs are quite adequate, the vital signs are only superficial estimates of the patient’s health, or that vital signs do not signify the nature of the underlying pathology.” (ABA p. 95)

In the scientific community behavioural science is considered an outdated position, refuted by Chomsky half a century ago; in the applied sciences, however, it is flourishing as a field. Applied Behavioural Analysis[1] focuses on behaviour modification, and its chief clientele is people with autism and people with intellectual disabilities. Its practitioners sell it as the only form of therapy that achieves predictable results which can be measured scientifically. Hence they focus heavily on the notion of observation, which they argue is the only way to guarantee that you can measure behavioural change.

            In ABA the emphasis isn’t so much on observation as on measurement of what is observed. And measurement is important for the ABA practitioner not because they want to sketch an epistemological theory of how we know what we know; rather, they argue that it is only through measurement that we gain accurate descriptions, prediction, and control of our subject matter. Without measurement our theories will be nothing more than subjective hunches. The key for the ABA practitioner is to gain predictive control over the subject matter they are studying. So rather than relying on passive observation, one is led down the route of careful measurement of behaviours, so that any behaviour change that occurs because of treatment can be precisely quantified.

            A behaviour is not just measured in isolation. Consider, for example, a form of self-injurious behaviour where a person is hitting themselves on the jaw. The aim of measuring this behaviour is to get answers about whether there are functional relations between the behaviour and environmental contingencies (ABA p. 94). To do this you need to ascertain when the behaviour began, what the antecedent events were that preceded the behaviour, the duration of the behaviour, the intensity of the behaviour, and whether the behaviours varied or changed as a result of treatment. Furthermore, if you keep a detailed measure of any time a particular behaviour occurs, you will be able to establish the possible effects of different environments on the intensity, duration and likelihood of the said behaviour occurring.

            ABA practitioners note that because behaviour occurs over time it has three primary qualities: (1) Countability: instances of a response class can occur repeatedly through time, hence they can be counted. (2) Temporal extent: every behaviour occurs during some amount of time (which can be measured). (3) Temporal locus: every behaviour occurs at a certain point in time with respect to other events (ibid p. 95).
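As a rough sketch (my own illustration, not taken from the ABA literature), the three qualities can be captured by logging each instance of a response class with a start and end time; count, total duration, and timing relative to other events then fall straight out of the log. The labels and helper names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BehaviourEvent:
    """One instance of a response class, with its temporal extent."""
    label: str    # e.g. "jaw hit" (self-injurious behaviour)
    start: float  # seconds from the start of the observation session
    end: float    # seconds from the start of the observation session

def count(events, label):
    """Countability: instances can be tallied."""
    return sum(1 for e in events if e.label == label)

def total_duration(events, label):
    """Temporal extent: each instance occupies a measurable span of time."""
    return sum(e.end - e.start for e in events if e.label == label)

def events_before(events, label, t):
    """Temporal locus: instances are located relative to other events."""
    return [e for e in events if e.label == label and e.start < t]

log = [
    BehaviourEvent("jaw hit", 10.0, 12.0),
    BehaviourEvent("jaw hit", 40.0, 45.0),
    BehaviourEvent("vocalisation", 20.0, 21.0),
]
print(count(log, "jaw hit"))           # 2
print(total_duration(log, "jaw hit"))  # 7.0 (seconds)
```

Comparing such logs recorded before and after a treatment is what lets the practitioner quantify behaviour change rather than rely on a subjective hunch.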

            As a form of therapy ABA emphasises measurement because it is the only way to guarantee the efficacy of the treatment: you cannot argue that a treatment led to certain problematic behaviours decreasing unless you are able to quantify how often the behaviour occurred prior to the treatment and how often it occurred post-treatment. While ABA’s emphasis on measurement is laudable and is a way of staying honest about what you are achieving, there is a sense in which it gives an air of objectivity to your treatments that is not fully warranted.

                              Phenomenology and Intellectual Disability

“We must return to the social world with which we are in contact through the simple fact of our existence, and that we inseparably bear along with us prior to every objectification. The social world is already there when we come to know it or when we judge it. An individualistic or sociological philosophy is a certain perception of coexistence systematized and made explicit. Prior to this coming to awareness, the social exists silently and as a solicitation.” (Phenomenology of Perception p. 379)

In our above discussion of behavioural science we considered a person hitting themselves on the jaw repeatedly. The ABA starting point was to count the number of times the behaviour occurred, an approach which in effect treats the person as an object in the world like any other object. The aim was to measure the intensity, duration, and temporal locus of an event, because this type of accurate measurement makes prediction and control of the subject more likely. This evidence-based approach is a useful tool which is favoured by many services, because it is easier to justify spending on a form of therapy if one can objectively characterise its utility. However, from a phenomenological perspective this behavioural approach is open to criticism for ignoring a key form of evidence. The phenomenological approach asks us to key in on our lived experience of the world, and this lived experience is sometimes ignored by behavioural methodology.

            From a behavioural perspective, if an instance of challenging behaviour occurs the emphasis is on finding antecedents, measuring the behaviours that occurred during the incident, and measuring the consequences of the incident. Over time, as we classify behavioural patterns into groups and measure how these patterns change with treatment, we can map all this data statistically to demonstrate the rate of behaviour change. However, in all this measuring and classifying, the lived world of experience is pushed out of our description of events.

In his Phenomenology of Perception Merleau-Ponty emphasized the lived world of experience which is at odds with the ABA conception of humans as just objects amongst objects waiting to be measured:

“…the whole escapes the well-known instruments of physical-mathematical analysis to open onto another type of intelligibility.” (ibid p. 11)

When Merleau-Ponty criticizes the idea of studying humans using the tools of physical-mathematical analysis, he is attacking both empiricists such as Hume and the logical positivists. On his view empiricists are guilty of ignoring the background conditions which give our observations sense. However, while his criticisms were directed at logical positivists and empiricists, they have clear relevance to the practice of ABA. Below, Merleau-Ponty describes what he thinks the likes of ABA practitioners are missing:

“… we find, as a fundamental layer, a whole already pregnant with sense. This is not a series of incomplete sensations between which memories would have to be embedded, but rather the physiognomy-the structure of the landscape or of the world-spontaneously in accordance with our present intentions and with our previous experiences. Here the true problem of memory’s role in perception appears, and it is tied to the general problem of perceptual consciousness. It is a question of understanding how consciousness-by its own energy and without bringing along any additional materials in a mythical unconsciousness-can, with time, alter the structure of landscapes; how at each instant, its previous experience is present to it in the form of an horizon that it can reopen, if it takes the horizon as a theme for knowledge in an act of remembering, but can also leave “on the margins” and that thus immediately provides the perceived with a present atmosphere and signification. A field always available to consciousness that, for this very reason, surrounds and envelops all of its perceptions; it is an atmosphere, an horizon, or even the “settings” that assign consciousness a temporal situation-such is the present of the past that makes distinct acts of perception and remembering possible. To perceive is not to experience a multitude of impressions that bring along with them some memories capable of completing them, it is to see an immanent sense bursting forth from a constellation of givens without which no call to memory is possible. To remember is not to bring back before the gaze of consciousness a self-subsistent picture of the past, it is to plunge into the horizon of the past and gradually to unfold tightly packed perspectives until the experiences that it summarizes are as if lived anew in their own temporal place. To perceive is not to remember.” (ibid p. 23)

In this lengthy quote Merleau-Ponty describes our perceptions and their relation to memory in a vastly different manner from the way ABA practitioners do. Consider again the example described above of a service user engaging in self-injurious behaviour. We have seen how an ABA practitioner would address the situation in terms of the quantification of the class of behaviour, the number of times the behaviour occurred, and the antecedents and consequences of the behaviour. Merleau-Ponty's emphasis would be on different factors.

            When you see another human engage in self-injurious behaviour, you do not just see the behaviour from the outside as a passive observer. Rather, you are caught up in the events occurring. While you can try to count the behaviours and afterwards analyse what has happened, at the moment the behaviour is occurring you are a participant in an event, not an observer of one. You see a person in distress who is hurting himself, and this behaviour impacts on you emotionally. Furthermore, as the distress occurs, memories at the horizons of your consciousness become more prominent: memories of previous experiences similar to this one flood forward and colour how you experience the current event.

            As well as your memories colouring how you deal with the self-injurious behaviour, your mind will also be directed towards the future. Based on your past experiences, you will anticipate how long this behaviour may go on for, and this will colour how you feel about it. All of this happens in the moment. But each moment floods into the next, and your engagement with the service user as you try to calm them down will set off a flow of events that is unpredictable. Any behaviour you engage in to minimize the self-injurious behaviour will impact the consciousness of the service user you are working with. The service user may recall past instances where staff tried to intervene when they engaged in self-injurious behaviour, and this recollection will colour how they experience the current intervention. Furthermore, they may have an idea of how much frustration such interventions typically bring them, and this may cause them to anticipate an extended period of frustration[2]. In this interaction you have two people engaged in a struggle, who interact and react to each other in a dynamic process that is not subject to prediction and control. For any idea of measuring the behaviour of the service user to make sense, one would also need to put in place a measurement system for the staff, detailing their history of interactions with the service user, together with a system for analysing the culture of interaction in relation to group dynamics, and an analysis of the dynamics that shaped the group. When trying to quantify a service user's behaviour ABA-style, a more contextual approach is needed. No behaviour occurs in isolation.

            An approach to a person who engages in self-injurious behaviour that treats the behaviour as an independent event to be measured will ignore the fact that the person doing the measuring is not an objective scientific instrument like a thermometer or a camera, but is rather a subject engaged in the drama unfolding in the moment. Whether or not the person is aware of their own behaviour and feelings, and of how they are affecting the way the event unfolds, they will still be a part of the event. And being aware of themselves and of their lived experience in the moment can help them modify their role in the event as it unfolds; this may offer an insight that is more useful than simply passively measuring the challenging behaviour.

            So, for example, a health care worker experiencing a service user engaging in a bout of challenging behaviour, such as self-injurious behaviour and loud screaming, may recall that in the past this behaviour lasted for hours. This recollection may make the worker feel frustration, which will make them tense and hence less effective in implementing calming procedures, and could result in the behaviour getting worse. No amount of observing the behaviour of the service user will tell you why the behaviour got worse unless you factor in your own feelings and behaviour, which played a key causal role in the unfolding drama.

            One difficulty in understanding these situations is that we are not always good at this type of self-reflection:

       “In general, a manner of thinking that is unaware of itself and that is at home in the things cannot be refuted by describing phenomena. The physicist’s atoms will always seem more real than the historical and qualitative picture of this world; the physico-chemical processes more real than organic forms; empiricism’s psychic atoms more real than perceived phenomena; and intellectual atoms (namely, the Vienna Circle’s “significations”) more real than consciousness, so long as one seeks to construct the picture of the world, life, perception, or mind, rather than recognizing the experience we have of them as the immediate source and as the final authority of our knowledge.” (ibid p. 24)

Here Merleau-Ponty is noting that, to a person committed to a scientistic philosophy, the world as we experience it will always recede into the background and appear unreal. Such unreflective people will dismiss the lived world of both themselves and the service user they are working with. However, despite this dismissive attitude, whether they like it or not they are immersed in this world, and their engagement in it will have influences on them that cannot be measured from the point of view of an objective spectator.

[1] Henceforth Applied Behaviour Analysis will be referred to as ABA.

[2] The service user may have difficulties in projecting into the future or in remembering the past and this will impact on how they experience the interaction.

The Batman who Laughs, Freedom and Moral Responsibility

“I take Orwell’s claim that there is no such thing as inner freedom, no such thing as an “autonomous individual,”… That is there is nothing deep inside each of us, no common human nature, no built-in human solidarity, to use as a moral reference point…To be a person is to speak a particular language, one which enables us to discuss particular beliefs and desires with particular sorts of people. It is a historical contingency whether we are socialized by Neanderthals, ancient Chinese, Eton, Summerhill, or the Ministry of Truth.” (Rorty, ‘Contingency, Irony and Solidarity’ p. 177)

Batman (along with Superman) is considered one of the most iconic heroes of our modern fictional world. To some, Batman is the ultimate hero: unlike other comic book heroes, he has no superpowers; he relies on his own intelligence and training to save people in distress. While his persona is darker and more shadowy than that of Superman or Captain America, he is typically portrayed as the hero the world needs. But one wonders whether the concept of a hero is applicable to a creature whose actions are the result of a series of contingent accidents. Batman's traditional biography has it that he was born as a hero when, as a child, he witnessed his parents being killed in a mugging. This accident of history gave Batman his sense of justice and propelled him into developing his skills as a superhero. Another accident of history gave him the capacity to become who he was: he was born into one of the richest families on the planet. This gave him obvious advantages over his fellow man as he developed into the hero he later became. He didn't have to worry about feeding himself or surviving, so he could dedicate himself full-time to becoming who he wanted to be.

Each different accident of history could have resulted in an entirely different human being (including events that happened to his parents that may have altered his genetic make-up). Given that the type of person we become is contingent on a series of accidents, one wonders to what degree a person can be held responsible for behavioural traits over which they had no say. When thinking about what we could have become if things had been different, it is helpful to use the tools of possible worlds: worlds which are like the actual world but differ because of a few contingent changes.

‘The Batman who Laughs’ is a story about a Batman who exists in a different world from the Batman we know. In this world Batman fights and kills the Joker. Unbeknownst to Batman, the Joker has laced himself with a neurotoxin, which is released upon his death and which Batman inhales. Slowly the neurotoxin modifies Batman's personality until he eventually transforms into a sadistic monster akin to the Joker. This Batman develops a sick sense of humour and enjoys torturing his friends and family. The Batman who Laughs is a terrifying monster, but one with a tragic element: we know the type of person he was prior to being exposed to the neurotoxin, and we know he had little choice in the monster he became.

From a moral point of view it could be argued that the Batman who Laughs made a crucial choice that the Batman we know never made. He chose to kill the Joker, while our Batman always managed to deal with the Joker without resorting to murder. However, while it is possible to judge this choice and place it as a key causal factor in how Batman became the Batman who Laughs, this “choice” has to be placed within a context. Batman was driven to murder the Joker because of the savage torture he received at the Joker's hands. This torture may have played a role in his lack of control when he committed the murder. More importantly, it was the neurotoxin which turned him into the Batman who Laughs, and this neurotoxin could have been administered without the Joker being murdered. So it is fair to say that Batman had little choice in becoming the monster he became.

To this extent it is also sometimes argued that while the Batman who Laughs is a tragic figure, the Joker is evil because he consciously embraces the chaos and chooses to be who he is. However, this way of parsing the issue is implausible. Unlike Batman, the Joker has no canon origin story; his character has undergone various iterations and reimaginings. But the ambiguity of his origins aside, there are (or would be, if he actually existed) causal antecedents which led to his becoming who he became: genetic markers which made him more likely to become a psychopath, and presumably a life history in which these genetic predispositions were moulded by the environment that shaped him. Despite the apparent glee the Joker takes in his diabolical acts, there is little doubt that his personality would have been shaped by causal antecedents beyond his control.

Despite our intuitions to the contrary, it would appear that the Joker, the Batman, and the Batman who Laughs had little say in who they became, and so it would appear that we have little cause to morally judge any of them as either good or evil.

There is, however, another way to look at this issue, one that relies on focusing on levels of explanation. Some scientists argue that colour doesn't exist in the objective world: in the world as revealed by physics there is no colour. Colour is a feature of our neuropsychology that results from our brains interpreting light waves that reflect off objects, hit our retinas, and are processed by the occipital lobe. On this view, colour is a secondary quality, one that exists only for perceivers with our kind of brain; colours have no mind-independent existence and do not make up the ultimate furniture of the universe.

The philosopher Daniel Dennett notes that despite the fact that physicists have no use for the concept of colour in their basic science, there is little call to purge the concept of colour from our daily activities. The rules of the road tell us that we must stop at a red light, may proceed with caution at an amber light, and are free to drive if the light is green. If a person drives through a red light and causes a car crash resulting in a death, the person will be legally culpable. A physicist who argued that, because colours don't exist at the level of basic physics, a person who drove through a red light and killed someone would not be morally culpable, would be laughed out of court. But in a sense my above argument about the Joker and Batman makes a similar move: I am arguing that at one level of explanation people's behaviour can be explained in terms of causal antecedents beyond the person's control, and that therefore we cannot hold the person responsible for their actions.

But our intentional explanations in terms of beliefs and desires are a key tool we use in coordinating our behaviour in relation to each other. Our entire legal and moral system is bound up with explaining our behaviour in light of beliefs and desires. We assume that people act on reasons, and that these reasons can be good or bad, all the while acknowledging that at the level of basic physics beliefs do not exist.

At the level of free choice there is little evidence to support the suggestion that it exists: if our universe is deterministic and causally closed there is no room for free choice, and an indeterministic universe would leave no room for free choice either. So from a scientific perspective we are justified in arguing that, like colour, free will doesn't exist. But from the pragmatic perspective of coordinating behaviour, free will, like colour, is a useful tool, whether or not it exists in the fundamental furniture of the universe.

Dennett cites a paper by Vohs and Schooler, ‘The Value of Believing in Free Will: Encouraging a Belief in Determinism Increases Cheating’, which showed experimentally that subjects who read a text encouraging disbelief in free will were more likely to cheat on a task than controls who didn't read the text. Vohs and Schooler's study is a single study and it hasn't been replicated thus far, so we shouldn't take its results as sacrosanct. But it does indicate that disbelief in free will may lead to problematic behaviours.

We know from behavioural science that by changing the consequences of certain behaviours we can increase or decrease the likelihood of those behaviours occurring. If Vohs and Schooler's study is replicated, we will have good reason to think that a belief in free will and moral responsibility increases the likelihood of moral behaviours we approve of. But it is early days, and it would be too soon to say for sure; much more experimental data is needed.

So where does this leave us with the Batman who Laughs and the Joker? In both cases we have good reason to judge them criminally insane. Both, while highly intelligent, lack the emotional wherewithal to curb their criminal behaviour. Therefore, while they should be locked away forever, they are not morally culpable. In the case of the Batman, prior to his transformation, who killed the Joker, things appear to be a bit different. Here, despite his torture, we have a man who may have had the cognitive control to desist from committing the murder. Though there were extenuating circumstances leading him to commit the murder, we do have some reason to judge him culpable. We judge him culpable for pragmatic reasons (the consequences for society), not for metaphysical reasons; we do have reason to think he could have done otherwise. A society in which such acts are viewed as simply deterministic events with no moral connotations may be a society with an increased likelihood of amoral behaviour, just as in Vohs and Schooler's experiment.

Every Thing Must Go and Institutional Power

“We demarcate good science-around lines which are inevitably fuzzy near the boundary-by reference to institutional factors, not directly epistemological ones.” (Every Thing Must Go p. 33)

‘Every Thing Must Go’ relies heavily on the notion of scientific institutions. The authors' justification for this reliance is pragmatic: these institutions have been highly successful in the past, so we should trust them. They have a pragmatic view of science and argue that philosophers' attempts to explicate necessary and sufficient conditions for something being science will inevitably be inferior to the practices honed by scientists working in the field, which yield more reliable results.

“To reiterate; we assume that the institutions of modern science are more reliable epistemic filters than are any criteria that could be identified by philosophical analysis and written down.” (ibid p. 37)

Ladyman et al's primary target is analytic metaphysics, which they argue constitutes a crude caricature of scientific modelling. They applaud the clear and rigorous style of argumentation used by analytic metaphysicians; however, they argue that despite this argumentative style such metaphysicians are engaging in crude pseudo-scientific reasoning. The majority of analytic metaphysicians would describe themselves as naturalists and would consider themselves to be theorising about the world in a way consistent with contemporary science, though using different techniques.

Despite describing themselves as naturalists, such metaphysicians rely on toy models of reality that are wildly at odds with contemporary discoveries in physics. Using examples from contemporary metaphysical papers, Ladyman et al demonstrate that analytic metaphysicians regularly rely on assumptions about the physical world that are not only pre-quantum physics or pre-relativity but pre-Newtonian. Self-described physicalists gleefully speak of reality at base being composed of atoms banging against each other. The ontology they presuppose is closer to Cartesian physics than to anything discovered by contemporary physics. Furthermore, aside from the crude ontology that is uncritically assumed, a key methodological assumption of analytic metaphysics is that it counts in a theory's favour if it chimes with our intuitions. Ladyman et al note that this methodological assumption is wildly at odds with the assumptions of our best science, which holds that our brains, designed by evolution to mediate our interaction with middle-sized objects, are terrible at intuiting the nature of reality.

Ladyman et al aren't arguing, à la the logical positivists, that metaphysics per se is a meaningless discipline. Rather, they are working within the tradition of people like Chomsky and Quine, who argue that metaphysics is a perfectly sensible discipline, but that our best way to draw a metaphysical picture of reality is through understanding the world as our best science describes it. They offer two heuristic constraints on metaphysicians: (1) The Principle of Naturalistic Closure: a new metaphysical claim is justified if and only if it shows how two or more scientific hypotheses (at least one of which is drawn from fundamental physics) jointly explain more than the hypotheses taken separately; (2) The Primacy of Physics Constraint: any theory we construct must be consistent with our best contemporary physics.

Ultimately, the metaphysical picture they offer in place of analytic metaphysics is a form of structural realism. They argue for this picture on the basis of two famous philosophy of science arguments. (1) The pessimistic meta-induction: scientific theories in the past with great predictive and explanatory success referred to entities we now know do not exist, e.g. caloric and phlogiston; therefore it is likely that our current scientific theories also refer to entities that do not exist, and hence we shouldn't be realists about the entities presupposed in scientific theories. (2) The no-miracles argument: given the pragmatic and predictive success of our best scientific theories, if they don't pick out real things in the world their success is a miracle; if germ theory isn't picking out real things, e.g. germs, then its success is inexplicable. The no-miracles argument and the pessimistic meta-induction are in tension with each other. Ladyman et al try to resolve this tension by arguing that we can have our cake and eat it: we can be realists about our scientific theories and account for theory change by arguing that what is preserved across theories is not things but structure.

The metaphysical theory they argue for is compelling and offers a parsimonious way of thinking about the history of science. Furthermore, their criticisms of analytic metaphysics for its reliance on intuitions and on pre-Newtonian models of physics are to the point. But while I share their respect for science and their naturalism, I am unsure what to make of their claims about the institutions of science. Science is clearly our most successful way of interpreting reality, and it gives us a level of prediction and control far superior to that of any other academic discipline. Nonetheless, scientific institutions aren't always the best metric of what counts as a reasonable question to ask.

Ladyman et al suggest that asking which hypotheses would be accepted for a grant proposal is a reliable determiner of the validity of a hypothesis:

“Here, then, is such a norm. Almost all successful participants in ‘bona fide institutional science’- on which we will say more below-learn in graduate school, or soon after, which sorts of hypotheses one cannot propose as the targets of investigation in a grant proposal to a ‘serious’ foundation or funding agency with non-zero prospects of success.” (ibid p. 33)

They even jokingly ask us to imagine the prospects of an analytic metaphysician seeking funding in a particle physics department. The prospect is indeed hilarious; one can imagine the metaphysician being laughed out of the room. But one wonders whether this would be a reliable metric.

In the Harvard psychology department in the late forties, if you had sought funding to study the neural correlates of consciousness you would have been laughed out of the room. With B. F. Skinner's brand of behaviourism the dominant strand of psychology in the America of the time, the study of subjects such as consciousness was deemed beyond the pale of science. Pre-Skinner behaviourists would have rejected studying consciousness on methodological grounds: they would have argued that consciousness existed but couldn't be studied scientifically because of its subjective nature, and so would have left it to philosophers to speculate over its nature. Skinner's radical behaviourism held that consciousness was real but was best understood in terms of behaviour; he would have argued that the best approach to studying consciousness would be to study how contingencies of reinforcement are arranged to help us develop tools to speak about our private experiences. The key point is that during the heyday of behaviourism, certain questions about the nature of consciousness would have been deemed silly and laughed out of court by the dominant scientific institutions of the time.

Even after the rise of cognitive science in the 50s, most psychologists and neuroscientists would have been afraid to seek funding for a study of consciousness. It was only in the 1990s, when world-famous scientists such as Francis Crick started studying consciousness, that the taboo was broken. Ladyman et al make the following claim:

“We seek a principle, referenced to the institutional factors that make science epistemically superior, for distinguishing well-motivated from ill-motivated metaphysical proposals; we do not seek a principle for distinguishing sense from nonsense.” (ibid p. 34)

One wonders whether they would argue that the institutional factors which kept consciousness a taboo subject in science for over fifty years were justified.

And it is not just in psychology that institutionally sanctioned taboos are enforced. In his book ‘What is Real?’ Adam Becker tells the story of how physicists who wanted to interpret quantum mechanics were actively run out of the field. The dominant mantra in physics was ‘shut up and calculate’; physicists who questioned this approach and offered an interpretation to explain the experimental results of quantum mechanics risked serious harm to their career prospects. So here again one wonders whether Ladyman et al think that the institutional factors in physics that were used to weed thinkers like Tim Maudlin and David Albert out of physics departments and into philosophy departments were justified.

While I admire Ladyman et al's focus on scientific practice, and their criticisms of philosophy that is wildly at odds with what we know about the world through experimentation, I think their view of institutional science is too rationalistic. Thomas Kuhn's historical approach to understanding scientific revolutions reveals scientific institutions to be a much more human, fallible, and tribal scene than the objective picture presented by Ladyman et al. And as we have seen in our brief discussion of behaviourism and quantum mechanics, dominant scientific institutions can sometimes stand in the way of understanding rather than enhance it.

Le Doux, Jablonka, the Cambrian Explosion and Anxiety

“Survival behaviours thus have very old roots that make them universal. But the kinds of experiences humans call conscious feelings-that is emotions-I propose are a much more recent development, possibly emerging via evolutionary changes in the human brain a mere few million years ago” (Le Doux, ‘The Deep History of Ourselves’ p. 4)

In Ginsburg and Jablonka's[1] recent book ‘The Evolution of the Sensitive Soul’, they argue that once animals developed an ability called Unlimited Associative Learning they would, as a by-product, acquire sentience. Their fascinating and detailed story attempts to discover how sentience evolved, and traces it back 550 million years to the Cambrian explosion. While the story they tell involves detailed scientific analysis, it also has a dramatic feel. In the world of the Cambrian explosion you not only had the most complex predator/prey interactions in the history of the world up to that point; according to Jablonka, the creatures of the Cambrian explosion lived lives of constant anxiety. These creatures were the first to have feelings about the world they inhabited, but they had not yet developed systems to help them manage these feelings, so they lived in a state of constant fear. Jablonka goes as far as to imply that such creatures lived in a paranoid emotional state akin to psychosis:

“The Cambrian probably witnessed the first nervous breakdowns: the first Cambrian subjectively feeling animals would have been nervous wrecks, with the equivalents of paranoia and posttraumatic stress disorder (PTSD) dominating their mental lives.” (‘The Evolution of the Sensitive Soul’ p. 430)


Jablonka’s grim but dramatic story portrays the Cambrian Explosion as a kind of subjective nightmare for its inhabitants.

The story she tells is one of interest for the philosophy of mind, biology, neuroscience, etc. Another area where her story of the evolution of consciousness has importance is psychiatry. When scientists try to understand human suffering and mental pathologies, they partially rely on models derived from the study of other mammals. The assumption is that, because they have brain structures similar to our own, such mammals provide reliable but fallible ways of understanding the neuroscience of human suffering. Scientific studies of mammals in various threatening situations indicate that they behave in ways similar to the way humans behave (the fight-or-flight response), and that their brain states when they engage in fear behaviour are similar to our brain states when we engage in fear behaviour. In all mammals the almond-shaped structure deep in the temporal lobe (the amygdala) plays a key causal role in fear behaviour. For this reason, when psychiatrists and psychologists are trying to understand psychological disorders such as anxiety disorders or phobias, they rely heavily on animal models.

The reason we rely heavily on animals when trying to understand human behaviour is that it is easier to study animals directly than it is to study humans: there are obvious ethical constraints on doing invasive experiments on humans. When it comes to studying other animals, mileage varies on the degree to which we can study them invasively. Humans are typically reticent about invasive studies on animals to which they have emotional attachments, such as fellow primates (e.g. chimps) or companion animals such as dogs. Such reticence is tempered when experiments seem likely to cure diseases that plague humanity, such as cancer or various neurological disorders; but unless there seems to be a direct practical benefit, most humans would deem invasive experiments an unjustified practice[2]. Hence the majority of invasive research is performed on animals to which humans have a less close attachment; rats are the paradigm experimental animal. As mammals they have brains structured similarly to our own, and rats don't typically trigger our empathy; hence they are the go-to animal neuroscientists study when trying to understand the neuroscientific nature of our subjective experiences.

Studies of rats indicate that not only do they have brain areas structured similarly to our own, but they also have behavioural patterns similar to ours. Jaak Panksepp has argued, based on behavioural and neuroscientific evidence, that all mammals share with humans seven basic emotional systems: (1) Fear, (2) Panic/Grief, (3) Seeking, (4) Lust, (5) Care, (6) Play, (7) Rage. There is little debate over Panksepp's characterisation of the similarly structured brain states and behavioural patterns in all mammals. The issue is whether all mammals actually subjectively experience fear when they engage in fear behaviour. For higher-order theorists of consciousness like Joseph Le Doux, there isn't sufficient evidence for us to attribute conscious states to non-human mammals, as they lack certain cortical features that Le Doux deems necessary for a state to be conscious. In his book ‘Anxious’, Le Doux argues that the ineffectiveness of contemporary neuro-pharmacological treatments for mental illness stems from the fact that theorists think the drugs are targeting conscious feelings, when in fact they are merely targeting behavioural responses (a rat freezing) that may not be accompanied by subjective states of any kind.

Given that the debate over whether all mammals have subjective experiences has a practical bearing on subjects such as neuropharmacology, Jablonka's argument that subjective experience predates mammals by millions of years isn't just a theoretical question but a very practical one.



Ginsburg and Jablonka’s Evidence

“Using these criteria, we identify four new transitions in the evolution of neural animals: the transition from multiple exploratory and directed reflexes to limited associative learning; the transition from limited associative learning to UAL and minimal consciousness; the transition from imagination to symbolic representation and communication” (‘The Evolution of the Sensitive Soul’ p. 343)

Like Le Doux, Jablonka focuses on the evolution of movement and on creatures with the capacity to move towards nutrients and away from danger; even simple bacteria have this capacity. Jablonka agrees with Le Doux that such simple creatures probably do not have sentience. Her story diverges from Le Doux's in that she argues that creatures develop consciousness with the evolution of Unlimited Associative Learning, whereas Le Doux argues that higher cortical areas are the secret sauce that creates consciousness. However, before discussing Unlimited Associative Learning, I first need to outline the nature of its simpler ancestor, Limited Associative Learning.

Limited Associative Learning: Skinnerian Organisms

The evolution of Limited Associative Learning[3] provided a massive advantage for the creatures that had it. The advantage it provided can be seen in the number of living creatures today that still use it as a form of learning. Jablonka notes that as well as fish, amphibians, reptiles, birds, and mammals, even invertebrate groups including nematodes, Platyhelminthes, crustaceans, non-vertebrate chordates, arthropods, annelids and molluscs display LAL (ibid p. 294). She also notes that all animals that demonstrate evidence of limited associative learning are bilateral, have brains, and probably evolved in the Cambrian era (ibid p. 293).

In order to understand LAL one must first understand how it relates to conditioning. Since Pavlov discovered classical conditioning, and Thorndike and Skinner discovered operant conditioning, there has been much debate on how the two relate to each other. The primary question people have wanted to resolve is which is more fundamental: classical or operant conditioning? The consensus, backed more by tradition than by evidence, is that classical conditioning is the fundamental form and that operant conditioning is derived from it.

Jablonka parses classical conditioning as a type of world learning, and operant conditioning as a kind of self-learning. In classical conditioning a particular stimulus (food being presented) causes an unconditioned response (salivating). By repeatedly pairing the food stimulus with a neutral stimulus (a bell ringing), the animal eventually develops a conditioned response of salivating when the bell rings. In this experimental setting the animal is being passively conditioned to associate one stimulus with another. In operant conditioning the animal is more active: the animal engages in a type of behaviour and this behaviour has certain consequences. Behaviour that becomes more frequent as a result of its consequences is said to be positively reinforced.

Jablonka notes that although the distinction between classical and operant conditioning seems very clear, it is a hard distinction to draw from an experimental perspective. Pavlov's experiments did stop the dogs from exploring their environment, and so to a certain degree did manage to focus on the dogs' perceptual associations rather than on the consequences of their exploratory actions in the world. Jablonka argues that despite some limited success (Colomb and Brembs 2010) in dissociating the two forms of conditioning, in reality they form a continuum. Operant conditioning is a type of conditioning involved in self-learning, which involves assigning value to a particular action or movement, and classical conditioning is a form of world learning, which involves assigning value to sensory stimuli (ibid p. 298).

Jablonka defines LAL as follows:

“We define LAL as conditioning that includes both self and world learning and involves the formation of predictive relations between non compound stimuli, actions, and reinforcers” (ibid p. 321)

To explain LAL Jablonka leans on the phenomenon of blocking, which she argues demonstrates an organism inhibiting the learning of irrelevant stimuli. The psychologist Leon Kamin first studied blocking, in which the presentation of a new CS alongside a CS that already perfectly predicts the reinforcer fails to support new learning (ibid p. 322). Jablonka describes this in terms of an organism predicting reinforcement, and situates it in the prediction-error paradigm. What an animal already knows (certain smells lead to food) doesn't need learning, but if an animal learns that something new (a bell ringing) also leads to food, this will be a 'surprise' and will require the animal to update its model of the world. Jablonka parses this as an animal constantly generating predictions about the value of the stimuli it encounters. With a novel, surprisingly reinforced input the most efficient strategy is to adjust the predictions to minimize future surprise (ibid p. 322).

Adopting this approach to LAL can help us explain the nature of blocking:

“A model of blocking posits that the effect of the new concurrent stimulus Y is inhibited because there is no difference between the reinforcing effect of X and that of XY. This makes evolutionary sense: mobile animals with LAL encounter many incidental stimuli that co-occur with an already reinforced stimulus but do not contribute to reinforcement, and learning about these stimuli would be a waste of time and energy.” (ibid p. 323).
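The prediction-error account of blocking described above can be made concrete with the classic Rescorla–Wagner learning rule, in which all stimuli present on a trial share a single prediction error. The sketch below is purely illustrative and is not taken from Ginsburg and Jablonka's book; the learning-rate and reinforcement parameters are arbitrary.

```python
# Minimal Rescorla-Wagner simulation of blocking (illustrative sketch;
# the parameter values here are arbitrary, not drawn from the book).
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Update associative strengths V for each stimulus across trials.

    trials: a list of sets of stimuli presented together, each trial
    followed by the reinforcer (whose asymptotic value is lam).
    """
    V = {}
    for stimuli in trials:
        total = sum(V.get(s, 0.0) for s in stimuli)  # combined prediction
        error = lam - total                          # shared prediction error
        for s in stimuli:
            V[s] = V.get(s, 0.0) + alpha * error     # common update
    return V

# Phase 1: stimulus X alone predicts food.
# Phase 2: X and the new stimulus Y are presented together.
V = rescorla_wagner([{"X"}] * 20 + [{"X", "Y"}] * 20)
# Because X already absorbs nearly all the predictive value, the error
# on the compound XY trials is close to zero and Y learns almost
# nothing: this is blocking.
```

Because the update is driven by the difference between the reinforcer and the *summed* prediction of all stimuli present, a fully predicted reinforcer generates no surprise, and the redundant stimulus Y never acquires value, exactly as the quoted model requires.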

Unlimited Associative Learning and Minimal Sentience


Jablonka argues that UAL has four key features:

(1) Hierarchical processing, which enables compound perception and compound action.

(2) Integration between perception and action models.

(3) Memory for compound percepts.

(4) A flexible global reinforcement system. (ibid p. 360)

In essence UAL involves a creature being able to string together non-reflex-eliciting inputs, such as a specific conjunction of several features (green, stick-shaped, large), or to string together a non-reflex series of actions. For Jablonka, with the arrival of UAL we get creatures with the capacity for minimal consciousness. But a problem she associates with this arrival of consciousness is that, unlike LAL, UAL doesn't have a built-in capacity to stop overlearning. Hence, she argues, these creatures were capable not only of minimal sentience but also of severe anxiety.

A Few Brief Criticisms

A key problem with the story Jablonka tells is that while her account of behavioural capacities is well justified, her projection of consciousness goes beyond the evidence. Detailed studies over the last fifty years give us strong evidence that, intuitions aside, complex behaviour is not a reliable indicator of consciousness. Le Doux recounts some of these reasons in his recent book 'The Deep History of Ourselves':

(1) Split-brain patients: when the corpus callosum is cut in human subjects, the left hemisphere can no longer communicate with the right hemisphere. Subjects respond to commands that are flashed to the non-verbal right hemisphere. If 'Stand Up' is flashed to the subject, they will stand up; but when asked why, they will give a confabulated reason. One interpretation of this is that the behaviour of standing up is unconsciously controlled.

(2) Blindsight: after damage to the visual cortex, patients report being blind to things on one side of their visual field. Although the patients claim they cannot see on that side, they can accurately grasp objects there and perform better than chance in experiments, indicating that they were perceptually registering objects on that side even though they had no phenomenal consciousness of them. Again, this indicates that some complex behaviours don't need consciousness. So why assume that creatures in the Cambrian Explosion were conscious?

(3) Global amnesia: patients who have had their hippocampus removed report, when asked, that they are not updating their memories. Nonetheless, these patients can still learn complex motor tasks. So despite their conscious memory being destroyed they could still learn procedural tasks, indicating that their complex motor behaviours need not be conscious (Le Doux, 'The Deep History of Ourselves' pp. 266-268).
Overall, what these studies indicate is that complex motor behaviours do not require consciousness. So Jablonka's assumption that creatures in the Cambrian that had UAL also had minimal consciousness is something of a stretch. Jablonka hasn't done enough to distinguish unconscious motor behaviour from consciously driven behaviour.

At this stage the empirical evidence doesn't conclusively support or refute Jablonka's claims. To use a tired cliché: more research is needed. A possible way out of this impasse lies in Le Doux's claim that a central weakness of neuro-pharmacology is that it is based on animal models that track fear behaviour instead of the feeling of fear. In his book 'Anxious' Le Doux suggested a different way of understanding anxiety. If future neuro-pharmacological studies based on Le Doux's recommendations are more pragmatically successful than current techniques, then this would be evidence that his theory of consciousness is superior to Jablonka's.

Once this data is in we will have more evidence to help us decide between the competing pictures of the Cambrian Explosion. We will have data that will help us decide whether the creatures of the Cambrian Explosion were perpetually horrified creatures like characters from a David Lynch film, or merely beautifully designed unconscious robots competing with each other.

[1] Jablonka and Ginsburg are co-authors of 'The Evolution of the Sensitive Soul'. Throughout this blog-post, for ease of reading, I will use 'Jablonka' as shorthand for Jablonka and Ginsburg. I use Jablonka's name rather than Ginsburg's because I am more familiar with her work, but the post is about the book they co-authored.

[2] Obviously experiments still go on today on chimps, dogs etc., and sometimes these experiments are done for commercial profit and not to cure diseases that plague humanity. However, overall, scientists today need to justify their experimental practices on animals to a greater degree than a hundred years ago.

[3] Henceforth Limited Associative Learning will be referred to as LAL.

The Rise of The Care-Bots

In his 2017 blog-post 'The Carebot Dystopia: An Analysis' Patrick Danagher criticized the idea that future care provided primarily by robots would necessarily lead to a dystopian state of affairs. Danagher's blog-post focused on two papers which painted dystopian pictures of a world of robot care: Coeckelbergh's 'Artificial Agents, Good Care, and Modernity', and Sparrow and Sparrow's 'In the Hands of Machines? The Future of Aged Care'. Danagher's argument, as well as the arguments of the papers he critiqued, focused primarily on the care of the elderly, which is an understandable focus. As he correctly noted, people are living longer and longer, and the older we get the more likely we are to develop diminished capacities in a variety of areas. The elderly don't necessarily need to be cared for; some people in their 90s live independent lives without any care. Nonetheless, as we age we typically come to require care as a result of some form of disability that may be a consequence of aging. Disabilities in mobility become more likely as we age, and likewise cognitive disorders such as dementia. So, given that humans are living longer and longer, finding better techniques to care for the elderly becomes more important.

While Danagher's focus on the care of the elderly is understandable, it is unnecessarily narrow. Another area where robot help could become a possibility, and which raises the same ethical concerns, is the care of the disabled in general. Care-bots could be used to help people with intellectual disabilities and people who are disabled as a result of neurological disorders, strokes, car accidents etc. Since the real focus is on the issue of using care-bots to help people who are disabled, focusing primarily on the elderly, not all of whom are disabled, seems to be off target.

One of the key arguments made against the use of care-bots is that such care-bots cannot provide loving care for the people whose needs they are supposed to support, and that they are therefore deficient tools (Sparrow and Sparrow p. 17). Sparrow and Sparrow make this claim about the deficiencies of care-bots a central plank of their dystopian thesis:

“In the meantime, robots are clearly not capable of real friendship, love, or concern- only (perhaps) of their simulations. This has a number of important implications for their involvement in caring roles… as we argued above, robots will by-and-large not be capable of succeeding in aged-care roles where emotional work is an important part of the role” (ibid p. 17)


Sparrow and Sparrow go on to make an even stronger claim: since such robots are incapable of actually caring for those they serve, it is morally reprehensible to use them as tools, because doing so may result in those in care becoming deluded into thinking that the care-bot is a friend. Danagher gave a convincing argument that there is little reason (other than meat chauvinism) to assume that a care-bot won't eventually be capable of really empathically ministering to those who employ it. While Danagher has a point, the fact is that we have no idea whether we will ever be able to create care-bots that actually care. Rather than engaging in speculation about whether future AI research will ever solve the hard problem of consciousness, I think we should be asking a different question: 'do disabled people actually want their care-bots to care for them?'.

Research in disability studies indicates that people may not actually want a care-bot (or a human carer) to act as a friend. As the philosopher Eva Feder Kittay noted in her 'Learning from My Daughter: The Value and Care of Disabled Minds', many people with disabilities would prefer to view the people who provide their care as professionals doing their jobs, as opposed to people who are in a loving relationship with them:

“Although family members most often fill the role of the paid attendant, the preferences of many disabled adults, interestingly, do not favor this arrangement. The sense of independence disabled people hope to attain is, according to some, best served by a paid stranger with whom one has a thin relationship of employer to employee instead of the thick relationships of family. In order to sustain a sense of independence-or as I shall suggest, the illusion of independence, the personal attendant must become invisible. In the words of Lynn May Rivas, the carer needs to turn over the “authorship” of her work to the disabled person” (Kittay 2019 p. 158)


To such people, the fact that a robot which is helping them doesn't care for them would not be a drawback; it would be a positive. Indeed, one of the benefits some disabled people would see in care-bots is precisely that they are tools which can be used to increase their own independence. People with disabilities, like everyone else, have friends and family; they are not looking for care-bots to fill a caring void, but rather want tools they can use to increase their independence as they interact with their world.

Missing from the normative claims about the use of care-bots is any attempt to ask the disabled and elderly whether they would like to use them. Asking them would need to be done in tandem with ascertaining their degree of understanding of the nature of the care-bots. Instead of asking such questions we get the opposite course, where what they want is ignored and we are given a normative argument that such bots are intrinsically bad independent of what the patient wants. Such normative arguments aren't bad per se, but they do have the drawback of ignoring the actual wants of the people in need.

Sparrow and Sparrow do note the importance of ascertaining whether people actually want to be looked after by care-bots. They even acknowledge (in a footnote at the back of their paper) that there is some evidence that some elderly people have positive attitudes towards care-bots. There is, though, still no conclusive evidence, positive or negative, about the attitudes that the elderly who need assistance, or other disabled people, would have towards care-bots.

Sparrow and Sparrow worry that even if the elderly do have positive attitudes towards care-bots, this may reflect ignorance of the care-bots' nature. To avoid this difficulty they suggest a deliberative process in which prospective users can debate the pros and cons of the issue and come to an informed decision before settling their views on the topic. I am sympathetic to such approaches, but there is a difficulty: such a process leaves out a substantial proportion of disabled people, who may be incapable of engaging in this deliberative process.

A great number of people with disabilities have either no linguistic competence, or linguistic competence so limited that it effectively bars them from engaging in the deliberative process. People who suffer damage to Broca's area of the brain (as a result of accident or stroke) can sometimes lose their ability to produce natural-language sentences. These people still have preferences and wishes, even if they can no longer express them through language. Similar considerations apply to various developmentally disabled people who, for whatever reason, don't develop a fully functional language. Despite the fact that they cannot speak, they can think and have preferences about the things they like in their lives.

Now, people like Sparrow and Sparrow would probably agree with me that it is a pity that people without a language are barred from the deliberative process when it comes to the care-bot issue. But they could add that there is little to be done: if people are incapable of engaging in public deliberation, they are unfortunately barred from the choice in relation to care-bots.

I would argue that it would be a mistake to exclude non-linguistic people from having a say in who helps provide their care. We regularly discover people's preferences by looking at their behaviour; a person's behaviour can reveal a lot about what they prefer doing. In a care setting, people without language regularly indicate their food choices by pointing to a picture of the food they want, pointing to the actual food they want, or handing back food they don't want. Even with people who do have language, we can discover facts about them by looking at their behaviour. Some people claim that they don't like Facebook because of ethical concerns about what it does with our data; nonetheless, they spend hours on Facebook every day. Their behaviour indicates revealed preferences that contradict their expressed verbal pronouncements. In the case of non-linguistic people, their behaviour is all we have to go on when trying to discover what they want.

We do not know whether non-linguistic disabled people would prefer to have their needs met by care-bots. But it is possible to find out, by using some care-bots to help them on a limited basis and seeing if they find the process helpful. If they choose to interact with the care-bots as much as they do with ordinary carers, this is an indication that they find the care-bots useful. Now, doing studies to discover revealed preferences in relation to care-bots would be a difficult thing to research, and would involve complex ethical considerations. Furthermore, it may turn out that such studies conclusively show that people do not like interacting with care-bots. But what if non-verbal disabled people do like being cared for by care-bots?
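The idea of reading preferences off behaviour can be sketched very simply: count how often each option is freely chosen when both are available. The snippet below is a purely hypothetical illustration; the option names and the choice log are invented, and real revealed-preference studies would of course need far more careful controls.

```python
# Hypothetical sketch: inferring a revealed preference from a log of
# freely made choices, when verbal report is unavailable.
# The option names and the example log are invented for illustration.
from collections import Counter

def revealed_preference(observed_choices):
    """Rank options by the proportion of occasions they were chosen."""
    counts = Counter(observed_choices)
    total = len(observed_choices)
    return {option: n / total for option, n in counts.most_common()}

# An invented log of which helper a person approached on each occasion.
log = ["care-bot", "human carer", "care-bot", "care-bot", "human carer"]
prefs = revealed_preference(log)
# A consistently higher choice rate for one option is defeasible
# behavioural evidence of a preference for it.
```

The point of the sketch is only that choice frequency gives a crude, observable proxy for preference, which is exactly the kind of evidence available for people who cannot state their preferences in language.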

Sparrow and Sparrow are committed to the view that care-bots are by nature unethical. They note that care-bots have the potential to mislead people about their nature and hence are inherently deceptive: people will erroneously think that the care-bots are their friends, all the while the care-bots will be nothing more than uncaring mechanical devices. As we saw above, not all people with disabilities want their carers to be their friends, and this mitigates some of Sparrow and Sparrow's concerns. It is of course possible that some people will be deceived about the nature of the care-bots. But if the care-bots do end up increasing the quality of life of the people they help, and if those people prefer interacting with the care-bots, one could argue that this outweighs any concerns about people being potentially confused about the nature of the care-bots. Furthermore, it is of course possible for people to be confused about the nature of their relationship with the humans who care for them; such concerns are not limited to care-bots. So it would be unfair to rule out the use of care-bots because they have the potential to mislead some of the people they are helping.

Coronavirus, Intellectual Disability and Language

“The limits of my language are the limits of my world” – Ludwig Wittgenstein.

In a recent interview with Joe Rogan, the engineer Elon Musk speculated that within three years his neural nets will have rendered language obsolete as a mode of communication. Whatever the prospects of Musk's wildly speculative views on the future of language, at the present time language is a vital tool in coordinating our behaviour. In this blog-post I will discuss some of the challenges that face non-verbal people with intellectual disabilities who are in full-time care during the Covid-19 lockdown. My reflections will be to some degree region-specific, as I will be speaking about the lockdown as it was implemented in Ireland. Obviously, however, the difficulties I speak of apply universally to the problem of maintaining social isolation for people who have an extremely limited understanding of the reasons for the isolation.

Language is a wondrous tool that enables us to achieve magical things. Using words we can refer to objects which are millions of miles away from us (the Sun), to objects that existed millions of years in the past (a T-Rex), and to non-existent objects (a unicorn). Furthermore, we can refer to unimaginably large objects (the Andromeda Galaxy) and unimaginably small objects (a quark). We can combine our words using grammatical rules that give us the ability to say an unlimited number of things, true or false, about the objects we are referring to. We use language not only to speak about our shared world but to discuss possible worlds with various different properties. Not only do we use words to refer to objects in the external world, we also use words to refer to our own internal states. My headache or toothache may not be visible to you, but if we speak the same language I can tell you I have a headache or a toothache.

There are many people who, to varying degrees, lack the linguistic competence we take for granted. There is a dazzling array of different ways people can have difficulties with language. Some people can form sounds and repeat a few words they have heard but lack the capacity to attach concepts to the sounds they form. Others can use a few words to refer to things in their environment but lack the grammatical capacity to combine words into sentences: such people can say 'Ball' but cannot say 'I want the ball' or 'Isn't that a nice ball?'. Some people can understand some language, e.g. 'Dinner time' or 'Time for a bus drive', but are completely unable to produce any language of their own.

Lacking the ability to use language doesn't imply lacking the ability to think. Work in developmental psychology indicates that prior to learning language children have conceptual abilities concerning object permanence, agency, causation etc. (Carey 2006, Burge 2006, Spelke 1997). There are people who have never developed a language who nonetheless have thoughts and interests of their own. But because such people lack a language, their ability to communicate their needs is limited.

Language and communication are not synonymous; it is possible to communicate one's needs without being able to speak. Such communication typically involves establishing a joint frame of reference. A person can tell you they want something by pointing at it, or, if they lack the ability to use pointing as a tool, they can bring you to the object they want. They can communicate that they don't want food by handing it back. A person can indicate that they are happy by smiling and sad by frowning.

But even a simple tool such as pointing isn't a universal gesture. Few non-human animals understand pointing at all, and while some non-human primates can be trained to point, their understanding of the pointing gesture is limited. Typically developing children begin pointing at the age of one. Infants typically use pointing for three different purposes: (1) to request help, (2) to offer information, and (3) to express an attitude (excitement, fear, etc.) (Tomasello 2019 p. 98). So when a non-verbal person uses pointing to communicate, more is required than just interpreting the pointing: it involves interpreting facial expressions and behavioural cues to understand the meaning of the gesture. Furthermore, some people with developmental disorders such as autism have trouble using and understanding pointing, so they may only be able to tell you they want something by bringing you towards the object they desire.

Having the ability to understand and use concepts, and having preferred activities you like to engage in, while lacking the ability to communicate what you want, can be frustrating at the best of times. Non-verbal people with intellectual disabilities rely on their families and carers to help meet their needs and to ensure that they get to engage in the activities they enjoy. Providing care in this sense involves getting to know the person very closely and learning what they like, so you can narrow the search space when trying to interpret what it is they want to do during the day.

Today, with the coronavirus, governments have lockdowns in place to try to slow the spread of the virus. One reason to slow the spread is to ensure that hospitals aren't overwhelmed by infected people: if the hospitals are overrun they won't be able to provide sufficient care, and more people will die than would have died otherwise. Another reason is that slowing the spread gives scientists time to develop effective treatments for the virus, and possibly even a vaccine.

Understanding why there is a lockdown requires the peculiar features of language we discussed above. Understanding a word like 'virus', even at the most rudimentary level, involves being able to refer to something very small and to understand its behaviour by describing its properties. The concept of a 'lockdown' involves an understanding of abstract things like the rule of law. An understanding of 'social distancing' requires an understanding of units of measurement: the distance you can stand from a person without being infected, the time you can spend near a person, and how these affect the probability of infection. And understanding the possible outcomes if we don't adhere to social distancing involves reference either to places far away in space (other countries that haven't implemented social distancing) or to times far away in the past (people during the 1918 flu who didn't follow it). A lot of our thinking about the coronavirus and how we should react to it involves thinking about possible worlds: if we don't do this we will live in world x, but if we do we will live in world y.

Non-verbal people with intellectual disabilities have been thrown into a world where everything is different. As we discussed above, they can think about their world, and be anxious and scared about the changes they are going through. They can communicate their immediate needs, but without language it is difficult for them to convey their more complex wants.

Some effects of the lockdown here in Ireland are that day activation centres are closed, schools are closed, and visits to care homes and community houses are not allowed. So a lot of non-verbal people with intellectual disabilities who live in a residential home or in community housing suddenly find themselves no longer going to work or school. They find themselves with limited access to their community: they can no longer go to the places they like, e.g. out for a coffee, for a walk in the park, or to visit friends. Most worryingly, they can no longer be visited by their families. It is a frightening thing to suddenly be cut off from your family, your community and your daily activities without any understanding of why.

While explaining to a non-verbal person with an intellectual disability why there is a lockdown, or even what a lockdown is, is probably a non-starter, there are some practical things that can be done. With technology we have the capacity to hold live chats with friends and family using apps like Zoom or WhatsApp. Even if the non-verbal person cannot engage with the chat itself, seeing their friends and family and hearing their voices will provide some comfort; it will hopefully let them know that their family is still out there and still cares about them. Another useful tool would be a visual schedule indicating the daily activities that are planned. The schedule could be mapped out over the coming weeks, and plans could be made for future activities that will become available as the various stages of the lockdown are lifted. Obviously, the degree to which long-term plans based on visual schedules can work will depend on the cognitive abilities of each person. Primarily, though, what is needed is to provide a rich environment in their residence with multiple activities, so that their lives remain rich and enjoyable even while restrictions are in place.