What is a Concept, and what creatures have them?

When considering the question of what a concept is, one is faced with a choice. 'Concept' is to some degree a theoretical term that we use to explain, predict and control the behaviour of various different creatures. As a theoretical term, a scientist can pretty much define it as he wants: an explicit definition is to be justified by its explanatory and predictive utility within a scientific theory. Some philosophers argue against this approach because it leaves obscure the relation between the ordinary language concept of 'concept' and any theoretical definition of it. A scientist could reply that it is not his problem to explain the relation between his theoretical terms and the terms of ordinary language. His job is just to construct explanatory theories which help predict and control the phenomena under discussion; if a philosopher wants to worry about the relation between theoretical terms and ordinary language terms, that is their prerogative, but it is not the job of the scientist to worry about such issues. In fact Dan Dennett has argued that the job of the contemporary philosopher is to help people understand the relation between the scientific image and the manifest image.

In this short blog post I won't offer an explicit definition of what a concept is. I will instead discuss areas where people typically attribute concepts to others, and move down the scale towards more ambiguous cases where it is unclear whether attributing concepts is justified. My central message will be that attributing concepts is not something done according to a clear-cut criterion. There is no magical metaphysical cut-off point where creatures suddenly become concept users. There are rather areas where we have more or less justification for attributing conceptual abilities to others.

One area where there is no disagreement about whether we are justified in attributing concepts to others is that of ordinary mature language-using humans. If a person has a language then everyone agrees that they also have concepts. The reason we feel justified in attributing concepts to people who have a language is that, by tracking their linguistic behaviour, we can determine their conceptual abilities. Thus if a person is speaking about dogs we can determine what their concept of a dog is by questioning them. The person can answer a variety of different questions about their beliefs on the nature of dogs, and from this we can rationally reconstruct their implicit concept of a dog. The fact that a subject can provide justification for their beliefs about dogs gives us clear evidence about the nature of their concept of a dog. This ability to justify and respond to questions by giving reasons is a more demanding criterion than merely being able to discriminate between objects in one's environment. As people like Bob Brandom have noted, if we used the ability to discriminate between objects in our environment as our criterion for attributing concepts to others, then we would be forced to attribute concepts to rusting metal, thermostats, flowers etc. So to avoid this overly broad criterion, where we are forced to attribute concepts to any entity no matter how dumb, some philosophers have argued that we should limit the attribution of concepts to those creatures that have a language.

This move of limiting conceptual abilities to language users (hence to human beings) is typically justified by an appeal to the philosophical views of Ludwig Wittgenstein. The appeal is ironic, because it portrays Wittgenstein as a kind of Platonist who provides necessary and sufficient conditions for something to count as a particular concept, and who demonstrates that only language users meet the essential criterion for counting as concept users. This quasi-Platonist approach is the opposite of the approach Wittgenstein actually adopted, so it is strange that his name is so often attached to this Platonist world view.

One of the many novel aspects of Wittgenstein’s ‘Philosophical Investigations’ is that he attacked the idea that concepts are the type of thing which have essential features:

Consider, for example, the activities that we call “games”. I mean board-games, card-games, ball-games, athletic games, and so on. What is common to them all?- Don’t say: “They must have something in common, or they would not be called ‘games’” – but look and see whether there is anything common to all. – For if you look at them, you won’t see something that is common to all, but similarities, affinities, and a whole series of them at that. To repeat: don’t think, but look!-Look, for example, at board-games, with their various affinities. Now pass to card-games; here you will find many correspondences with the first group, but many common features drop out, and others appear. Now we pass next to ball games, much that is common is retained, but much is lost-Are they all entertaining? Compare chess with noughts and crosses. Or is there always winning and losing, or competition between players? Think of patience. In ball-games, there is winning and losing; but when a child throws his ball at the wall and catches it again, this feature has disappeared. Look at the parts played by skill and luck, and at the difference between skill in chess and skill in tennis. Think now of singing and dancing games; here we have an element of entertainment, but how many other characteristic features have disappeared! And we can go through the many, many other groups of games in the same way, and can see how similarities crop up and disappear.

And the upshot of these considerations is: we see a complicated network of similarities overlapping and criss-crossing: similarities in the large and in the small. (Wittgenstein: Philosophical Investigations 67)

Wittgenstein's conception of concepts in terms of family resemblances has found some empirical support in the work of Eleanor Rosch (prototype theory) (1970) and George Lakoff (1987). If we adopt this anti-essentialist approach we are left with a view on which the concept of 'concept' is itself a criss-crossing network of family resemblances, with no clearly defined essential features located in Platonic heaven.

When we adopt this approach we step into a world like that of biology, where we long ago stopped asking essentialist questions like "when exactly does one species change into another?". Likewise, the question of the exact moment at which concepts appear on the scene makes little sense. This is not a matter of a simple yes/no answer but of degrees of usefulness in adopting intentional-level descriptions.[1]

To see the advantage of adopting this approach, consider how we treat people with schizophrenia and various forms of aphasia. To Platonic essentialists, having a concept is a clear-cut matter: if you have no language you have no concepts. The cut-off point is language and there is little more to be said on the issue. Unfortunately for these Platonists, reality doesn't fit into this neat little binary opposition. The world of biology is a messy one, and these Platonist philosophers need to wake up to the fact that reality doesn't always conform to their simple a priori intuitions.

Let us consider the case of a language-using adult who, as a result of a stroke, has a form of aphasia which leaves him no longer able to understand or speak a language. Now imagine[2] that this adult still has the ability to find his way around his environment, recognises his family members, and responds to them in a similar though diminished manner. For the Platonist, once this person loses his linguistic abilities as a result of a stroke we are forced to say that he no longer has any concepts. This seems odd. Say (as is quite common) such a person still likes the same food and the same music, and recognises his wife, his children, the family pet etc. According to the Platonic essentialist this person has no concept of his wife, or of his children, and so on. At best we can say that he is a reliable discriminator of objects in his environment, in the same way a thermostat is.

The above approach is not logically contradictory. Nonetheless there seems little point in adopting it. The fact that the person makes similar categorical discriminations post-stroke as they did pre-stroke, and that we can predict the person's behaviour by adopting the intentional stance and attributing conceptual abilities to them, seems reason enough to do so. There is little reason to deny concepts to people with aphasia in the way we might with people with severe late-stage Alzheimer's, who near the end do not seem to have any concepts whatsoever. In short there is little pragmatic reason for adopting this approach to a person with aphasia, and even less theoretical gain in doing so. None of this is meant as a proof; it is rather a reminder of the way we typically apply these concepts.

For a further exercise in imagination-stretching, consider the case of a person with schizophrenia who is speaking in word salad. There is a long tradition of psychiatrists and psychoanalysts trying to interpret the language of a person engaging in word salad as a coherent story with meaningful concepts. In a famous case, Daniel Paul Schreber wrote a book, 'Memoirs of My Nervous Illness', while undergoing a psychotic break. On a straight reading the book is senseless word salad intermingled with bits of sense here and there. Nonetheless people like Freud, Lacan, and Matte-Blanco (using the apparatus of psychoanalysis) have managed to give the book a coherent intentional-level explanation. In more recent times the clinical psychologist Louis Sass has offered an intentional-level explanation of Schreber's book using a different theoretical apparatus (a phenomenological one).

It should be noted that the intentional-level explanations of Schreber's work remain at the level of description, and as far as I am aware we have no way of deciding which (if any) of the interpretations is the correct one. Furthermore, we have no evidence that therapy using either the psychoanalytic or the phenomenological approaches above is in any way successful in treating schizophrenia on its own. Typically the best approach to treating schizophrenia is a combination of medical treatment and some kind of talking therapy.

The reason I bring up schizophrenia is that it raises interesting issues for Platonic essentialists about concepts. Does a person undergoing a psychotic break have concepts at the time of the break? Well, if the person has a language then according to the Platonic essentialist he has concepts. But what counts as having a language for the Platonist? Here I think their strict criterion falls by the wayside. Language is not a single thing. It consists of multiple interacting components: syntax, semantics, morphology, phonology, pragmatics etc. There are mountains of empirical studies of the deviance in syntax, semantics and pragmatics in the word-salad-type utterances of people with schizophrenia. One wonders which essential components must fail before the Platonist will decide that the person with schizophrenia is no longer a competent language user, and hence that we are not at that time justified in attributing concepts to him.

On the loose naturalistic approach that I am adopting (basically Dennett's stance-stance approach) these strict yes/no questions can be dropped. We know that talking therapy[3] can be quite effective when combined with drug treatment. This fact alone justifies us in adopting intentional-level interpretations of people undergoing a psychotic break and treating them as concept users. Appeals to arcane philosophical intuitions about some monolithic entity called language and its importance do not decide the issue. Rather, our ability to predict, and hence manage, the illness is what matters.

Similar ambiguities arise for things like Williams Syndrome and Specific Language Impairment etc. The fact is that the world is a messy place and the narrow binary intuitions that Platonists appeal to are not very helpful in dealing with such issues. The loose pragmatic approach is the best way of dealing with the complexity of the empirical world and avoids the trivial simplicities of the narrow Platonism I have described.

A critic could argue that the loose pragmatic approach I am adopting here will still result in a kind of free-for-all where we end up attributing concepts to thermostats, rusting metal etc. I am not sure that this fear is justified. I am not arguing for an anything-goes approach to intentional ascriptions. In his (1991) paper 'Real Patterns' Dennett used the work of the mathematician Gregory Chaitin to specify precisely when we have discovered a real pattern. Dennett glosses it as follows:

"Of course, there are bound to be other ways of describing the evident patterns in these frames, and some will be more efficient than others - in the precise sense of being systematically specifiable in fewer bits. Any such description, if an improvement over the bit map, is the description of a real pattern in the data." (Dennett: Real Patterns p.33)

Using Chaitin's approach we can distinguish between purely pragmatic intentional ascriptions to things like thermostats and cases where we are actually picking out real patterns in the environment. So if we theorise that non-linguistic creatures have concepts, we avoid the over-permissiveness in attributing concepts that people like Brandom warn about.
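
Dennett's gloss can be illustrated with a toy sketch. What follows is only a rough illustration of the idea, not Dennett's or Chaitin's own formalism: it uses off-the-shelf compression as a crude stand-in for "systematically specifiable in fewer bits", so a patterned data stream admits a description much shorter than its bit map while noise does not:

```python
import os
import zlib

def description_gain(data: bytes) -> float:
    """How many times shorter the compressed description is than the
    raw 'bit map'; a gain well above 1 marks a real pattern."""
    return len(data) / len(zlib.compress(data, 9))

patterned = b"ab" * 5000   # a highly regular stream: a real pattern
noise = os.urandom(10000)  # patternless bytes: nothing to compress

print(f"patterned: {description_gain(patterned):6.1f}x shorter")  # large gain
print(f"noise:     {description_gain(noise):6.1f}x shorter")      # roughly 1x
```

On this picture, ascribing beliefs and desires to a thermostat buys almost no compression over a physical-stance description, whereas intentional descriptions of an animal's behaviour can compress enormously; that difference is what marks the pattern as real rather than merely convenient.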

Finally, I have been quite critical of people who argue that only those with a language have concepts, and I have criticised their approach as too essentialist. However, I think their scepticism about attributing concepts to non-linguistic animals is very important. It calls into doubt the assumptions of those who are certain that some non-human animals have concepts, and this process of criticism can only be a good thing: it helps avoid complacent theorising unconstrained by the facts.

[1] To see the Intentional Stance approach in more detail see Dennett ‘The Intentional Stance’ (1987) and Dennett ‘Real Patterns’ (1991).

[2] This is an imaginary case, but cases of people with aphasia which obliterates their language production and comprehension while leaving their general practical ability to negotiate their social and physical environment largely intact are quite common.

[3] Both Cognitive Behavioural Therapy and psychodynamic approaches show similar levels of success when combined with drug-based interventions.

Dennett and the typical mind fallacy

DENNETT'S UNCONSCIOUS AUTOBIOGRAPHY

                                                 INTRODUCTION

 

Gradually it has become clear to me what every great philosophy so far has been: namely, the personal confession of its author and a kind of involuntary and unconscious memoir.  (Nietzsche: Beyond Good and Evil Sec 6)

 

In this paper we will argue that Dennett's views on the nature of the mind are based on his unconsciously projecting his own idiosyncratic type of mind onto all other people. We will argue that he is guilty of what William James called "The Typical Mind Fallacy": assuming that one's own type of mind is representative of all types of mind. We will show that Dennett has a particular type of consciousness, and that it is not representative of the consciousness of all other people; specifically, we will demonstrate that Dennett has a "Bodily-Linguistic" type of consciousness. Prior to doing this, however, we will sketch the typology which we are using to interpret Dennett's mind and the evidence we have to support the existence of this typology.

We are working with the typology which David Berman invented in his 2008 book Penult. This typology divides people into three main divisions: (1) Type 1: Mentalistic Consciousness; (2) Type 2: Bodily Consciousness; (3) Type 3: Socio-Linguistic Consciousness. The evidence for this typology comes from the variability in people's mental abilities. Some clear examples of confirmed mental variability are:

(1) Galton's evidence that people have different abilities to form mental imagery; see also Kosslyn, James etc.

(2) Dalton’s discovery that some people are colour blind.

(3) Some people have synesthesia; this fact was denied for years but is accepted now.

(4) There are variations in how people experience pain. See, for example, Roger Fillingim (2005) "Sex and Gender Issues in Pain", as well as Young et al. (2008) "Genetic Differences in Pain Variability".

The above four examples show variation in people's cognitive abilities and subjective experiences which went undetected for years before being discovered by empirical research. We suggest that such variation in mental abilities inadvertently influences the theories of the mind which philosophers construct. There is evidence that people's psychological abilities affect the philosophical theories they accept; see Berman "Philosophical Counseling for Philosophers" (2013), William James "Principles of Psychology" (1892), and Holtzman "Do Personality Effects Mean That Philosophy Is Intrinsically Subjective?" (2013).

Showing that there is variation in people's mental abilities, and that these variations influence the type of theories philosophers accept, is one thing; it is not yet evidence for Berman's tripartite typology. In Penult Berman uses evidence from the history of philosophy, interviews with living subjects, and his own introspective experiences to justify his typology. The typology makes sense of the views of the great philosophers throughout history. It also shows why so many of the great philosophers of the past, despite being great thinkers, could not agree with each other about the nature of the mind: all of them were operating on the mistaken assumption that there is only one type of mind. The diagram below shows how Berman divides the philosophers of the past according to his tripartite typology.

 

[Diagram: Berman's Typology. Under Type 1: Kant, Nagel, Searle. Under Types 2 and 3 (exact column placement lost in the original layout): Descartes, Berkeley, Plato, James, Dennett (major 2/minor 3), Spinoza, Russell, Rorty (major 3/minor 2), Hegel.]

 

Type 1 philosophers have a strong experience of their own consciousness; this experience is called unbounded consciousness. Type 2 philosophers have no experience of unbounded consciousness; their experience is entirely bodily and linguistic. Type 3 is a socio-linguistic type of consciousness. The types can be blended, with one type dominant. So, for example, Berman argues that both Rorty and Dennett are blends of Types 2 and 3: in Dennett Type 2 is dominant, while in Rorty Type 3 is dominant (Penult p. ).

Berman’s three main psychological types are vividly represented by Rene Descartes (Type 1), William James (Type 2), and Richard Rorty (Type 3). The following quote from Descartes illustrates his Mentalistic abilities:

 

When I consider the mind—i.e. consider myself purely as a thinking thing—I can't detect any parts within myself; I understand myself to be something single and complete. The whole mind seems to be united to the whole body, but not by a uniting of parts to parts, because if a foot or arm or any other part of the body is cut off, nothing is thereby taken away from the mind. As for the faculties of willing, understanding, of sensory perception and so on, these are not parts of the mind, since it is one and the same mind that wills, understands and perceives. They are (I repeat) not parts of the mind, because they are properties or powers of it. (Descartes: Meditations on First Philosophy p. 11)

 

The above quote illustrates that, as a Type 1 philosopher, Descartes experiences himself as primarily a mental entity. James’ experience of himself is entirely different:[1]

Let the case be what it may be for others, I am as confident as I am of anything that, in myself, the stream of thinking (which I emphatically recognize as a phenomenon) is only a careless name for what, when scrutinized, reveals itself to consist of the stream of my breathing. (James 1904)

There is I mean no aboriginal stuff or quality of being, contrasted with that of which material objects are made, out of which our thoughts of them are made, but there is a function in experience which thoughts perform…(namely)…knowing. Consciousness is supposed necessary to explain the fact that things not only are, but get reported, are known. (James: 1904 p.101)

Everyone assumes that we have direct introspective acquaintance with our thinking activity as such, with our consciousness as something inward contrasted with the outer objects which it knows. Yet I must confess that for my part I cannot feel sure of that conclusion. Whenever I try to become sensible of my thinking activity as such, what I catch is some bodily fact, an impression coming from my brow, or head, or throat or nose. It seems as if consciousness as an inner activity were rather a postulate than a sensibly given fact. (James: 1892: Text Book of Psychology p. 467)

James here is denying that we introspectively experience consciousness; his subjective experience is entirely different from Descartes'. While Descartes and James are good exemplars of Type 1 and Type 2, Richard Rorty is a good exemplar of Type 3. The following quotes illustrate Rorty's type of mind:

The temptation to look for criteria is a species of the more general temptation to think of the world, or the human self, as possessing an intrinsic nature, an essence. That is, it is the result of the temptation to privilege some one among the many actual and possible languages in which we habitually describe the world or ourselves (Richard Rorty: Contingency, Irony and Solidarity p. 15)

This is that there is nothing deep inside each of us, no common human nature, no built-in solidarity, to use as a moral reference point… To be a person is to speak a particular language, one which enables us to discuss particular beliefs and desires with particular sorts of people. It is a historical contingency whether we are socialized by Neanderthals, Ancient Chinese, Eton, Summerhill, or the Ministry of Truth. (ibid p. 177)

We can see from the above quotes that Rorty is incapable of conceiving of himself independently of language. This is a clear indication that he is a Type 3 thinker.

In Penult Berman goes into much more detail in defending his typology. Here we have shown what the evidence for the typology is and how it explains the philosophical views of various great philosophers. Throughout this paper we will analyse Dennett's writings in detail to show that he is a combination of a Type 2 and a Type 3 thinker, and that he incorrectly generalises his type of mind as representative of all minds. We will analyse Dennett's views on mental imagery, dreams and pain, and show that his views on these topics are derived from his own subjective experience of these phenomena.

DENNETT: EARLY CHILDHOOD AND PHILOSOPHICAL DEVELOPMENT

We will make the case that Dennett's philosophy of mind is partly derived from his own idiosyncratic psychology. This claim portrays Dennett as having experiences which are radically different from those of other philosophers, which leaves the question of how Dennett developed the type of mind he in fact has. Was it derived from his genetic code, or did socialisation play a more important role? Any answer will be highly speculative, as little has been written about Dennett's upbringing and obviously we have no information about the structure of his genetic code. However, trying to understand Dennett's life, and the extent to which his experiences influenced the type of mind he developed, will help make sense of his strange views on the nature of the mind. Dennett wrote a short autobiography detailing his life experiences; discussing it will help us understand his type of mind better.

Dennett's brief discussion of his early life and his early education in philosophy is interesting but sparse in details. Certain things do stand out, though. Dennett's early loss of his father was obviously a traumatic experience for him. According to Dennett, he was expected to follow his father into academic life in the humanities. So while Dennett's father was no longer alive, he was present to Dennett as an idealised figure, one whom people expected Dennett to live up to and to follow into the humanities. Dennett was interested in engineering and loved building things, yet was expected to go into the humanities. This situation would create resentment in most teenagers: his own dreams and aptitudes were subordinated to the expectations and wishes of his parents. He managed a minor rebellion against his family's wishes by not going to Harvard; he instead went to Wesleyan University, where he studied Maths and English. It was in his advanced Maths class that he first discovered Quine's work, when he read 'Mathematical Logic'. When Dennett first read Quine's "From A Logical Point of View" he claimed he was impressed but found something wrong with Quine's position. So Dennett contacted Quine and transferred to Harvard. It is worth noting that after less than a year Dennett's rebellion against his parents' wishes had ended: he was studying where they wanted him to study, and he was studying philosophy, a humanities subject. So Dennett's discovery of Quine was doubly beneficial: he had found a thinker who really interested him, and he had found a way of pleasing his family (in particular his dead, idealised father). By studying Quine he would be studying a humanities-type subject (keeping his family happy), while Quine's interest in natural science and formal logic chimed well with Dennett's scientific and engineering interests. So it is understandable that Dennett was excited by the discovery of Quine, which let him fulfil his own desires and his deep need to live up to his father's expectations simultaneously.

Dennett eventually wrote his undergraduate thesis on Quine, calling it "Quine and Ordinary Language". There are no details of the thesis available, though Dennett mentions that it was critical of Quine and that Quine thought highly of it. Dennett said that at this stage of his development he, unlike most of his fellow students, held Ryle's work in high regard; he thought Ryle's "The Concept of Mind" one of the best philosophy books he had read. Dennett finished college at the age of twenty, and at this early stage his primary influences were Ryle and Quine, both of whom were behaviourists. So it is tempting to argue that Dennett's strange eliminativist views on the mind resulted from his being educated by behaviourists: but this cannot be correct, since both Nagel and Kripke were undergraduates with Dennett, and they certainly did not end up sharing Dennett's view of consciousness. So we need something more than the influence of Dennett's teachers to explain his views on consciousness. Something in Ryle and Quine chimed with Dennett. We argue that their theories seemed intuitively correct to him because they rang true to his own lived experience. Those with different types of mind, for example Nagel and Kripke, found Ryle's theory counter-intuitive because the way they experienced the world was so different from Ryle's or Quine's descriptions of the mind.

Because he thought so highly of Ryle, Dennett went to Oxford to study under him. Ryle supervised Dennett's doctoral thesis, which eventually became his book "Content and Consciousness". Dennett notes that he saw problems with qualia as early as 1963, and that even then he believed that our conscious experience only seems to be as rich as we believe it is. So from a very young age he already felt that the philosophical community was deeply wrong about the nature of the mind. He would spend the next fifty years developing arguments and finding empirical evidence to support his intuitions about the nature of the mind. Pretty much from day one in Oxford he tried to inform his theories of the mind with the best neuroscientific evidence available. His PhD was so deeply immersed in neuroscience that a neuroscientist was asked to examine his thesis. Dennett had managed to follow in his father's footsteps while still getting to study the science which so interested him.

The most important thing to note about his description of his early development is that from the age of seventeen, when he was first exposed to philosophy, he was well disposed to the philosophies of Quine and Ryle. This indicates that he saw something in their behaviouristic explanations of mind which chimed with his own experience. He felt that 'qualia' was a useless theoretical term as early as the age of twenty-three. His entire philosophical life has been dedicated to providing philosophical arguments and scientific evidence to support his intuitions about the mind.

What appealed to Dennett about Ryle and Quine was their emphasis on language and behaviour. Subjective experience was not important to either Ryle or Quine, and both believed that language plays a central part in making us human. Dennett, being primarily a linguistic-bodily thinker, would have related to Ryle and Quine perfectly.

                         DENNETT ON CONSCIOUSNESS AND LANGUAGE

Dennett's theories of mind and consciousness have always emphasised the importance of language for thought and experience. In "Consciousness Explained" Dennett described the self as follows:

Selves are not independently existing soul-pearls, but artifacts of the social processes that create us, and, like other such artifacts, subject to sudden shifts in status. The only "momentum" that accrues to the trajectory of a self, or a club, is the stability imparted to it by the web of beliefs that constitute it, and when those beliefs lapse, it lapses, either permanently or temporarily. (Consciousness Explained: p 423)

We suggest that Dennett's description is of his own type of mind, not of the minds of all people. Language is such a deep part of his self that he is incapable of thinking independently of it. We will provide evidence for this claim by examining his views on mental imagery, dreams, colour etc.

The importance of language for Dennett becomes even more apparent when one considers his theory of consciousness. Dennett's theory of consciousness, which in 1996 he dubbed the "Fame in the Brain" model, is meant to replace the tempting idea of the "Cartesian Theatre". Throughout his philosophical career Dennett has attempted to replace the Cartesian Theatre picture with increasingly apt models, from his "Multiple Drafts" model to his "Fame in the Brain" model to his "Fantasy Echo" model. With the Fame in the Brain model he argues that consciousness is more like fame than television: it is not a special "medium of representation" in the brain into which content-bearing events must be "transduced" in order to be conscious (Sweet Dreams: p. 160). Rather, it is like fame, where one content-bearing event achieves something like celebrity in competition with other fame-seeking events. This metaphor is obviously imperfect. Fame is typically an intentional object of a number of agents (ibid, p. 161), and he concedes that it makes little sense to think of multiple homunculi all holding some content as an intentional object and elevating it to fame. So he instead opts for the less troublesome metaphor of consciousness as "influence in the brain".

For Dennett conscious events are attention-grabbing events: thoughts that persist in the mind for a sustained period of time. To help us think about this he asks us to consider fame, and how long a person can be famous for. Dennett plausibly argues that a person cannot be famous for a few seconds; likewise, a person being viewed by millions of people on television will not necessarily be famous. No matter how many people see me in the background of some sitcom, this will not make me famous. To be famous I would have to be a person whom others talked about enough that my presence reverberated within the stream of public discourse. Achieving this importance within the public stream of discourse takes time; it does not happen instantly. Some may argue that it in fact does occur instantly, pointing to people who seem to achieve instant fame as a result of some event; for example, the assassin of a president may achieve a level of instant fame. Nonetheless, despite appearances, instant fame does not really occur. It is something conferred post hoc, as a result of the influence the event has on the public discourse of others. Without the reverberations within the linguistic community the shooter would not achieve fame. Dennett argues that consciousness has similar features to the type of fame achieved in the external world.

For Dennett, fame in the brain, like fame in the world, requires echo-making capacities. To say that something is conscious even though it doesn’t have any echo-making properties within the mind is for Dennett senseless. Dennett uses the example of the aroma of classroom library paste as something that has echo-making properties for him. The aroma has strong echoic properties because it evokes various vivid childhood memories and associations for Dennett.

To be conscious is precisely to have a thought that yields enough influence to direct a lot of homunculi to engage with it; fame, he argues, is like this too. He also notes that the nature of fame has changed with the invention of electronic media:

"a recursive positive feedback became established, dwarfing the initial triggering event and forcing the world to wallow in wallowing in wallowing in reactions to reactions to reactions to the coverage in the media, and so forth." (ibid p. 167)

He thinks such fame was very difficult to achieve in the pre-electronic age. He argues that language did for human consciousness what electronic media did for fame: it transformed it. It is for this reason that he calls us 'Joycean machines'. The recursive capacity which is a central feature of natural language is one of the key features of consciousness. Dennett even makes the speculative prediction that non-human animals may have little echoic capacity, so strictly speaking will not have consciousness. Dennett's view on animal consciousness is definitely a minority position; the majority of scientists now accept that animals are conscious. Obviously science is not a majority-rules activity, so the fact that Dennett's position is not widely accepted does not prove that it is incorrect. The primary reason people argue that non-linguistic animals are conscious is their complex behavioural capacities. Behavioural tests have shown that some animals have complex concepts of number, agency, causality etc. However, Dennett points out that people often reason with these concepts unconsciously, and asks whether we can be sure that animals' competence with the above concepts does not entirely involve unconscious reasoning. He also notes that we cannot be sure that animals' behaviour isn't a kind of blindsight: such animals would be able to predict certain aspects of their environment without having any conscious experience of doing so. It is worth noting, though, that some mammals respond to anti-depressants and anti-anxiety medications in a way similar to humans, which indicates that their experiences are similar to ours. There is little to recommend the view that such animals suffer from total blindsight in all of these areas, though it is admittedly theoretically possible.

Dennett's speculation that non-linguistic animals may not be conscious is not central to his theory of consciousness. If his claim that non-linguistic animals are not conscious were refuted (and I think it will be), this would not refute his theory of consciousness as influence in the brain. Non-linguistic animals may have basic recursive capacities that make possible the type of conscious experience Dennett talks about.

At this point I should clarify what Dennett means by recursive capacities. In an obvious sense Dennett cannot mean that recursion is entirely unique to humans. Dennett accepts the computational theory of the brain; he also accepts that computational procedures govern animal brains, and it is a fair bet that the computational procedures which govern human and animal brains sometimes involve recursion. Dennett does not deny this obvious fact about the structure of animal brains. When he says that human language has a unique recursive structure he is speaking of recursive structure as a kind of software. He calls this software a virtual von Neumann machine (Consciousness Explained, p. 201). Dennett argues that the mind is like a software programme installed on the parallel neural networks of the brain (Densmore and Dennett 1996, p. 1). The software is installed onto the hardware of the brain through memes. Our linguistic abilities give us the capacity to understand and produce a potentially infinite number of sentences. Through communicating with each other, humans can learn different ways of thinking about the world. Our culture, made possible by our linguistic capacities, means that every child born into a linguistic community will have the capacity to use the tools for thinking created by previous generations, such as mathematics, logic, the scientific method etc. These tools, when learned, install programmes which can give brains the capacity to think in ways far superior to those available to animals unaffected by culture. This is analogous to the way some software (e.g. an app), when installed on hardware, gives the hardware capacities which it previously did not have.
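
The recursive property being appealed to here is the familiar one from formal grammar: a rule that can re-invoke itself, so that a finite rule set yields an unbounded set of sentences. The toy grammar below is our own illustrative invention, not anything Dennett gives; it simply makes the point concrete:

```python
import random

# A toy recursive grammar: the VP rule can re-invoke S via "that S",
# so the set of producible sentences is unbounded.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the cat"], ["the dog"], ["Alice"]],
    "VP": [["sleeps"], ["believes", "that", "S"]],  # recursion here
}

def generate(symbol="S", depth=0, max_depth=4):
    if symbol not in GRAMMAR:            # terminal word: return it as-is
        return symbol
    rules = GRAMMAR[symbol]
    if depth >= max_depth:               # cap nesting: prefer non-recursive rules
        rules = [r for r in rules if "S" not in r] or rules
    rule = random.choice(rules)
    return " ".join(generate(s, depth + 1, max_depth) for s in rule)

print(generate())  # e.g. "Alice believes that the dog believes that the cat sleeps"
```

Because the VP rule can embed a whole new sentence, no finite list exhausts what these three rules can produce; this is the sense in which a small, fixed "programme" can underwrite a potentially infinite linguistic competence.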

We can see from the above description that Dennett's conception of consciousness is strongly tied up with linguistic competence. This equating of consciousness with language is so strong that he goes as far as to deny that animals are conscious. Dennett's equating of consciousness with language is at odds with a lot of neurological data; yet he persists in claiming that language and consciousness are intimately connected. A possible explanation for his position is that he is unconsciously projecting his own mental abilities onto others. To analyse this contention we will examine his overall views on consciousness. In Part 1 we will examine his views on mental imagery and dreams. In Part 2 we will examine his views on the nature of pain. By analysing Dennett's conception of mental imagery, dreams, and pain, we will show that he is guilty of unconsciously committing the typical mind fallacy.

 

PART 1: DENNETT ON MENTAL IMAGERY

We claim that Dennett is a non-imager whose views on the nature of mental imaging are derived primarily from his own idiosyncratic psychological abilities. We argue for this conclusion by considering various psychological tests which indicate that people do indeed have differing abilities to form mental imagery, ranging from non-imagers to eidetic imagers. We will analyse Dennett's various discussions of mental imagery and show, through textual analysis, where he indicates that he may be a non-imager. We will compare Dennett's reports of his imaging abilities with those of people with various levels of imaging ability. Overall we will conclude that it is highly probable that Dennett is a non-imager.

                  Section 1: Dennett: Experience is not what you think it is

Dennett discusses pain in the context of a general evaluation of our supposed phenomenological world in Chapter 2 of Consciousness Explained. Dennett proposes a number of experiments designed to show us that our knowledge of our phenomenological world is less accurate than we assume. He proposes an experiment on mental images (p. 27), one on our sense of touch (p. 47), one on sound (p. 49), and one on vision (p. 54), as well as the experiment on pain (p. 60). Dennett argues as follows from his experiments:

Did anything you encountered in the tour of the phenom in the previous chapter surprise you? Were you surprised for instance that you could not identify the playing card until it was almost dead centre in front of you? Most people, I find, are surprised-even those who know about the limited acuity of peripheral vision. If it surprised you, then that must mean that had you held forth on the topic before the surprising demonstration, you would very likely have got it wrong. People often claim a direct acquaintance with more content in their peripheral visual field than they in fact have. Why do people make such claims? Not because they directly and incorrigibly observed themselves to enjoy such peripheral content, but because it seems to stand to reason…Am I saying that we have absolutely no privileged access to our conscious experience? No, but I am saying that we tend to think we are much more immune to error than we are.  (Consciousness Explained p.68).

The card experiment is easily testable because the card exists in the external world, so when we ask people what card they are seeing we can test their claims against an external fact. Asking people about things like mental images is different: when people make claims such as 'I am a vivid imager' we cannot easily test whether they are exaggerating their experiences, as we can with the card test. Likewise with people's experience of pain. Dennett is asking how we can know that people are not spinning stories based on theories they hold, as opposed to observing internal objects. However, let us assume that his comparison of the card trick with our internal experiences of mental images, and felt experiences of pains, is valid. What would this mean for our theory? Suppose that in the card experiment some of us discovered that our visual experiences were not as accurate as we believed them to be: we did not thereby discover that we have no experiences; our experience of the card at the centre of our visual field was fine, and it was only our peripheral vision that was affected. If our introspective experiences are analogous to our perceptions and the same errors occur, then this means we have to be careful with our introspection, not that introspection is somehow inherently flawed.

Section 2: Mental Imagery

William James and Francis Galton famously warned us against the typical mind fallacy of wrongly assuming that all people have minds the same as ours. Galton's breakfast-table questionnaire showed that people have differing abilities to form mental images, ranging from non-imagers to weak imagers to strong imagers to eidetic imagers. More recently, the psychologists Reisberg, Pearson and Kosslyn, in their paper "Intuitions and Introspections about Imagery: The Role of Imagery Experience in Shaping an Investigator's Theoretical Views", demonstrated that researchers working on the imagery debate were influenced by their own imaging abilities in the views they held on the nature of imagery. Their study showed that, as scientists did more experiments and learned more about the topic, their imaging abilities played less of a role in their views, because they were more influenced by the scientific evidence. Kosslyn however noted that a small percentage of those studied were not swayed by the scientific evidence:

In addition, the VVIQ scores were correlated with the responses for the views in 1980: the results indicated that researchers who reported less vivid imagery were more inclined toward the propositional view. Thus, in the early stages of the debate, researchers’ personal experiences were related to their theoretical stances. In contrast, when the VVIQ scores were correlated with their current views, the scores were found not to be correlated with their current attitude. This finding suggests that scientists really do pay attention to the data and let the data take precedence over their introspections or images. As results from increasing numbers of imagery experiments were reported and the case for depictive representations grew stronger, even many of those with poor depictive imagery became convinced that depictive representations exist and are used in imagery. Nevertheless, some of the extreme cases-who reported no imagery-persisted in denying the existence of mental imagery. (Kosslyn et al: The Case for Mental Imagery p 181)
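
The analysis described in the quote is an ordinary correlational one: each researcher contributes a vividness score (the VVIQ) and a rating of sympathy for the propositional view, and the two series are correlated. The numbers below are hypothetical, invented purely to illustrate the shape of the 1980 finding:

```python
# Toy illustration with made-up data, not the study's actual scores.
from statistics import correlation  # available in Python 3.10+

vviq_vividness = [2, 3, 3, 4, 5, 5, 6, 7]      # higher = more vivid imagery (our convention)
propositional_view = [6, 6, 5, 4, 3, 4, 2, 1]  # higher = more propositional leaning

r = correlation(vviq_vividness, propositional_view)
print(f"Pearson r = {r:.2f}")  # strongly negative: the less vivid the imagery,
                               # the more sympathy for the propositional view
```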

Kosslyn claims that in some extreme cases people who have no imagery persist in denying the existence of mental imagery. In an online article for edge.org Kosslyn argued that Zenon Pylyshyn was a non-imager whose views on the imagery debate were strongly derived from his lack of ability to form mental images. We aim to demonstrate in this paper that Dennett’s views on mental imagery can also be explained by his being a non-imager.

Dennett typically interprets people's claims that they think using mental images as nothing more than a theorist's fiction; in other words, a metaphor[2]. The reason he gives for this belief is partly based on his heterophenomenological method and his theoretical belief that, since people have no internal eyes, having mental images is impossible. We do not doubt that some of Dennett's reasons for treating mental images as metaphors are theoretical; however, we argue that the primary reason Dennett treats mental images this way is that he is a non-imager. This is not meant as any kind of personal slight on Dennett; both of the authors of this paper are extremely poor imagers. In fact one of the authors, David Berman, has written a paper, "Philosophical Counselling for Philosophers" (2008), which recounts how his own philosophical views were inadvertently affected by the fact that he is a poor imager. It is vitally important that such facts are discovered if we are to ascertain the truth rather than merely assume that all thinkers' minds are the same.

In his "Heterophenomenology Revisited" (2007) Dennett makes clear his view that mental images are mere metaphors which describe neurological facts we do not yet understand. We will quote his views at length, as they are very instructive:

The standard presumption is that ''I know because I see it'' is an acceptably complete reply when we challenge a reporter, but where a subject is reporting on mental imagery, for instance, we create an artefact. We ask a subject to tell us how many windows there were in the front of the house he grew up in, and he closes his eyes for a moment and replies ''four''. We ask: How do you know? ''Because I just 'looked'… and I 'saw' them!'' But he didn't literally look. His eyes were closed (or were staring unfocused into the middle distance)…When we confront this familiar vacuum, there is an almost irresistible temptation to postulate a surrogate world-a mental image-to stand in for the part of the real world that a reporter observes…The ''recollected image'' of the house has a certain richness and accuracy that can be checked, and its limits gauged. These limits give us important clues about how the information is actually embodied in the brain, however much it seems to be embodied in an ''image'' that may be consulted. This is where the experimental work by Shepard, Kosslyn, and Pylyshyn and many others comes into play. (Heterophenomenology Revisited, pp. 256-257).

One important point to note is that Dennett cites the experimental work of both Kosslyn and Pylyshyn as a way of finding out what is actually going on in the brain. This is interesting because of course Kosslyn is a defender of mental images, while Pylyshyn uses his experiments to cast doubt on the existence of mental images. This may give the impression that Dennett is open to the existence of mental images, that he is merely doubting them for the purposes of collecting heterophenomenological data, which he will later subject to experimental tests. However, his discussion of Shakey the robot shows that Dennett actually subscribes to the more radical view that mental images do not exist.

Shakey is a robot that Dennett introduced in Consciousness Explained for a thought experiment. Shakey was an actual robot, built in 1966 and programmed to move boxes around a room. Below (Fig. 1) is a picture of Shakey as he appeared in 1966 when first built:

[Fig. 1: Shakey]

Shakey cannot speak; however, Dennett speculates that if we could programme it with a language we could programme it to report that it moved boxes by rotating mental images of the boxes in its mind's eye. We as designers know that Shakey does not actually rotate mental images in its mind before moving the boxes. Dennett suggests that something similar occurs when humans report rotating mental images:

If a human subject says she is rotating a figure in her mind's eye, this is a claim about what it is like to be her, and indeed she intends to describe, unmetaphorically and accurately, a part of the real world (a part that is somehow inside her) as it seems to her. But she is speaking at best metaphorically and at worst creating an unwitting fiction, for we can be quite sure that the process going on in her head that inspires and guides her report (to put it as neutrally as possible) is not a process of image rotation. (ibid, p. 258).

Here we can see clearly that Dennett, despite his claims of neutrality, is denying that people really have mental images. The above statement casts interesting light on a more recent comment Dennett made about his own imaging abilities:

And when I do these mental gymnastics I make heavy use of my imagination, exploring various diagrams and pictures in my head, for instance. In short I exploit what Descartes would disparage as merely my imagination to accomplish what he would celebrate as my conception. (Intuition Pumps And Other Tools For Thinking: p. 289)

Here Dennett claims that he makes heavy use of mental images to think. However, when placed alongside his views on Shakey, it is obvious that he cannot be claiming that he experiences mental images. Suppose I see what I think is a ghost in front of me. If it is afterwards shown that what I took for a ghost was merely a projected image created as a hoax, this will affect my beliefs about the nature of what I saw; it will not change the fact that I did indeed experience something. The case of Shakey is different, according to Dennett. We and Shakey may sincerely report rotating a mental image in our mind's eye; however, we do not actually rotate the mental object or experience ourselves doing so; we merely report doing so. It is inconceivable that Dennett could subscribe to the view that all we are doing is giving a verbal report of something we do not experience if he in fact experienced mental imagery. We suggest that, given Dennett's views on the nature of imagery, his casual remark about thinking in imagery must be merely metaphorical.

Dennett spends much of "Heterophenomenology Revisited" arguing that he is neutral about the reality of the descriptions people give of their internal world. However, his manner of characterising people's reports as a kind of fiction contradicts this supposed neutrality. His discussion of after-images nicely illustrates this point:

Just as the fictional Sherlock Holmes can be correctly described as taller than the real Max Beerbohm, the fictional red stripes on your afterimage can be correctly described by you as somewhat more orange than the real red stripes on the flag flying outside the window. Fiction is parasitic on fact, and the afterimage stripe is red in exactly the same way that Holmes is tall.  (ibid, p.263).

So here we see that Dennett, despite his claims of neutrality, is in fact denying the reality of after-images. We suggest that there are two sources for his scepticism: (1) a theoretical commitment to materialism, and (2) the fact that he is himself a non-imager. We will primarily focus on (2).

When criticising traditional phenomenology, Dennett complains that we are unable to agree on what is and is not phenomenologically manifest (ibid, p. 261). We suggest that the reason phenomenologists have not discovered one true description is that different theorists have differently structured minds. However, for the purposes of this paper we will primarily evaluate Dennett's kind of mind and compare it with the minds of other kinds of thinkers. We will examine some of the claims Dennett makes in his book Consciousness Explained to illustrate this point.

In Consciousness Explained Dennett asks his readers to perform the following experiment:

When you close your eyes, imagine, in as much detail as possible, a purple cow.

Done? Now:

(1)   Was your cow facing left or right or head on?

(2)   Was she chewing the cud?

(3)   Was her udder visible to you?

(4)   Was she a relatively pale purple, or deep purple?

(Consciousness Explained p. 27)

Dennett speculates that most people will be able to answer these questions, and that those who cannot probably did not try to call up the image but merely said the words 'call up a purple cow' to themselves. This seems to indicate that Dennett is himself capable of calling mental images to mind. This is interesting because in his discussion of Shakey the robot he argued that people only have mental images metaphorically, yet in the above discussion of the purple cow he seems to admit that he can form mental images. However, a closer look at Dennett's views shows that he is not in fact claiming to be able to form mental images. Dennett claims that there are many things which we believe we experience but which, upon closer reflection, we do not experience. So, for example, in Consciousness Explained pp. 53-55 he discusses our visual field. He notes that from the point of view of naive reflection our visual field seems to be uniformly detailed from the centre out to the boundaries. However, he asks subjects to perform an experiment of trying to distinguish cards at the periphery of their visual field. People always fail this test. Dennett takes this to establish that while it may seem to people that they have certain experiences, experimental studies can reveal otherwise. He views the case of imagery as similar to the case of the visual field: it often seems to people that they have experiences of mental images, but closer inspection reveals this to be wrong. In his Content and Consciousness Dennett discusses a mental image of a tiger. He notes the following:

Consider the Tiger and his Stripes. I can dream, imagine or see a striped tiger, but must the tiger I experience have a particular number of stripes? If seeing or imagining is having a mental image, then the image of a tiger must - obeying the rules of imaging in general - reveal a definite number of stripes showing, and one must be able to pin this down with such questions as 'more than ten?', 'less than twenty?'. If, however, seeing or imagining has a descriptional character, the question needs no definite answer. Unlike a snapshot of a tiger, a description of a tiger need not go into the number of stripes at all; 'numerous stripes' may be all the description says. (Dennett: Content and Consciousness p.154).

Dennett's argument is that people cannot count the stripes of their supposed mental images of tigers; therefore purported mental images only seem to be images, and upon closer inspection are revealed to be mere descriptions of a tiger[3]. Dennett is doing a similar thing when asking people to call up a mental image of a cow. He admits that it seems like we can do so; however, he asks us further questions which aim to cast doubt on the reliability of our introspections.

Dennett follows up his question about an imagined purple cow by asking us to imagine a yellow cow, and to answer the same four questions about this mental image. He then asks a fifth question: (5) What is the difference between imagining a yellow cow and a purple cow? Dennett argues that since nothing in the brain is purple or yellow, and since even if it were we have no internal eyes with which to see such creatures, such claims must be mere seemings rather than something we actually experience. Some philosophers have argued that since we can experience a yellow or a purple cow in our mind's eye, and since by Leibniz's law these experiences cannot be identical with brain states which are neither yellow nor purple, and they are not events in the external world, they must belong to a different realm, the realm of the non-material mind. Dennett's discussion of Shakey above is his way of doing away with this problem. The thought experiment of Shakey is designed to show a creature who can report on mental states which it does not experience. Dennett argues that as a matter of empirical fact we are such creatures. We will argue, on the contrary, that Dennett is, as Nietzsche says, giving an unconscious autobiography of his own mental type. We will develop this point further when discussing Dennett's take on Kosslyn's experiments on mental rotation later in this paper.

Later in Consciousness Explained Dennett makes further claims about introspection which give us a deeper insight into the type of mind he has. He notes the strange fact that since Descartes' time philosophers have been trying to describe the basic facts of their internal experiences as clearly as possible, and yet have produced no results on which all agree. He briefly considers the possibility which we are developing here, that people have different types of mind and are accurately describing their own types of experiences, but he never really considers any evidence for or against it. Instead he explains the lack of agreed-upon introspective results by holding that our internal world is so vague that theorising about it will result in our unwittingly creating fictional internal worlds. It is important to note that Dennett does not offer much evidence for his view that introspection is theorising on a vague, practically non-existent internal world. He points out that some experiments show that people are wrong about aspects of their perceptual experiences on which they have made confident assertions. However, the fact that people can be wrong about their perceptual experiences does not really prove anything about their introspective experiences. It does show that people can at times be gullible theorisers, but what this points to is that we should be careful when introspecting and perceiving, not that introspection and perception are bad tools.

So why does Dennett deny that the reason people give different descriptions of their internal worlds is that they have different types of minds? We argue that he is guilty of committing the Typical Mind Fallacy. Dennett makes the following claim:

I suspect that when we claim to be just using our powers of inner observation, we are always actually engaging in a sort of impromptu theorising - and we are remarkably gullible theorizers, precisely because there is so little to "observe" and so much to pontificate about without fear of contradiction. (ibid, p. 68)

The key claim to note here is Dennett's claim that 'there is so little to observe' when we introspect. We argue that we should take Dennett at his word. When he introspects there are no images to observe because he, like 5% of the population, is a non-imager. Both of the authors of this paper report that they find little to observe when they introspect: they experience virtually no mental images, and almost all of their thought is in terms of internal talking, although from time to time they do experience some imagery. So from an introspective point of view they can understand Dennett's more extreme position of having no mental images, and can see why Dennett would believe that introspection involves a kind of unconscious theorising. The important point is that while Dennett, David Berman and David King are drawing on their own experiences, the reports gathered by introspective psychologists indicate that not everybody shares Dennett's kind of experience. We argue, and will support empirically, that people who are eidetic imagers will find Dennett's claims absurd, and they are right to do so from the point of view of their own minds. However, they would be wrong to think that Dennett is incorrect about his own mind; Dennett's mistake was unconsciously generalising from his own mind to all minds. We will illustrate in the next section that Dennett's views on mental imaging are derived not only from his own experiences but also from theoretical considerations.

Section 3: Dennett on Rotating Mental Images

In chapter 10 of Consciousness Explained Dennett discusses some experiments which have been done on the rotation of mental images. The experimental data he discusses is extremely interesting. The first experiment is noteworthy in that one of the authors (Dave King) was unable to perform it at all, owing to a difficulty in calling up mental images. In the experiment subjects are asked to check whether two objects placed beside each other, but at different angles, are the same shape.

The subjects typically reply that they are. When asked how they know, they reply that they rotated one of the objects in their mind's eye and superimposed it on the other. Now obviously this could be explained away as just a way of talking which the subjects have. However, further experimental research casts doubt on this view. Subjects were asked to rotate an object which was at a 45-degree angle to an object beside it, to see if the two were the same size and shape; they were then asked to do the same for an object at a 90-degree angle to its neighbour. The idea was that, assuming subjects rotate the objects in their mind's eye at a constant speed, the second task should take twice as long. The experimenters (Kosslyn, 1980) found that it did indeed take twice as long to rotate an object through a 90-degree angle as through a 45-degree angle. This would appear to show that some people do indeed think with mental images.
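The prediction being tested here can be put as a small toy model. The sketch below is our illustration, not Kosslyn's analysis, and the angular speed in it is an arbitrary assumption; the only point is that, at a constant speed, rotation time grows linearly with the angle.

```python
# Toy model of the rotation prediction (our illustration, not
# Kosslyn's analysis). If a mental image is rotated at a constant
# angular speed, the time spent rotating grows linearly with the
# angle, so a 90-degree rotation should take twice as long as a
# 45-degree one. The speed below is an arbitrary assumption.

ASSUMED_SPEED = 60.0  # degrees per second, purely illustrative

def rotation_time(angle_degrees: float) -> float:
    """Predicted time spent rotating an image through the given angle."""
    return angle_degrees / ASSUMED_SPEED

print(rotation_time(45.0))  # 0.75 seconds
print(rotation_time(90.0))  # 1.5 seconds: exactly twice as long
```

Whatever the absolute speed turns out to be, the ratio of response times should match the ratio of angles, and this is the pattern the reaction-time data are taken to show.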

Dennett is primarily worried about explaining these facts in a manner which does not commit him to the existence of a Cartesian Theatre, and he gives a theoretical account of the experiment in terms that do not presuppose one. When discussing the human ability to rotate mental images Dennett compares it with the ability of a CAD system to rotate various shapes on its cathode ray tube (CRT). He asks us to imagine a blind engineer using a CAD system as a prosthetic device to rotate images, communicating with the system in braille. This CAD system, which Dennett calls the Mark 1 CADBLIND, is fitted with a computer vision system, complete with a TV camera aimed at the CRT. The vision system reads the shapes on the CRT and communicates the results to the blind engineer. Dennett goes on to argue that if we can design a Mark 1 CADBLIND system, then developing a Mark 2 CADBLIND will be easy:

We just throw away the CRT and the TV camera looking at it, and replace it with a simple cable. Through this cable the CAD system sends the Vorsetzer a bit-map, the array of zeros and ones that defines the image on the CRT. (ibid, p. 291)

So after the CAD system performs its calculations it passes the information to the Vorsetzer (the visual system) and the information is translated. Dennett notes that we are not really saving much in terms of the calculations which still need to be done; we are just getting rid of some unnecessary hardware. He therefore proposes a Mark 3 CADBLIND system which actually saves on calculation:

So our Mark 3 CADBLIND will exempt itself from huge computational tasks of image-rendering by taking much of what it "knows" about the represented objects and passing it on to the Vorsetzer subsystem directly, using the format of simple codes for properties, and attaching "labels" to various "places" on the bit-map array, which is thereby turned from a pure image to something like a diagram. Some spatial properties are represented directly - shown - in the (virtual) space of the bit map, but others are only told about by labels. (ibid, p. 293)

Here Dennett notes that such savings only work if the two systems that need to communicate speak the same language; there would be a problem if the information the CAD system had was not in a format the Vorsetzer could use. Dennett argues that, given that the human brain was designed by the tinkering processes of natural selection, we should expect to find such difficulties of communication between the different areas of the brain. He speculates that since diagramming is an effective way of communicating, the brain may use such processes. He argues as follows: 'Diagrams do indeed amount to re-presentations of the information - not to the inner eye, but to the inner pattern recognition mechanism that can also accept input from the outer eye' (ibid, p. 293).
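The "diagram" idea is easy to picture in a minimal sketch. The code below is our own toy rendering of it, not Dennett's specification: some properties are shown directly in a bit-map, while others are merely told about by labels attached to places on it. All names and structure here are illustrative assumptions (the "colour code 37" label echoes the quotation further below).

```python
# A minimal sketch (ours, not Dennett's specification) of the Mark 3
# CADBLIND "diagram": a bit-map that shows some spatial properties
# directly, plus labels that merely tell about other properties.
# All names here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Diagram:
    bitmap: list                                # rows of 0s and 1s: properties "shown"
    labels: dict = field(default_factory=dict)  # (row, col) -> coded property: properties "told"

    def query(self, row: int, col: int) -> str:
        """What a Vorsetzer-like subsystem learns about one place."""
        shown = "filled" if self.bitmap[row][col] else "empty"
        told = self.labels.get((row, col), "no label")
        return f"shown: {shown}; told: {told}"

# A 3x3 image with one labelled place. "Colour code 37" stands in for
# a property that is only talked about, never rendered as colour.
d = Diagram(
    bitmap=[[0, 1, 0],
            [1, 1, 1],
            [0, 1, 0]],
    labels={(1, 1): "colour code 37"},
)
print(d.query(1, 1))  # shown: filled; told: colour code 37
```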

Twenty years after Dennett wrote Consciousness Explained, experiments by Kosslyn confirmed that two-thirds of the brain areas that light up when a person sees (including the occipital lobe) also light up when a person forms a mental image.

Not everyone accepts Kosslyn's results (see, for example, Pylyshyn 2003), but a substantial majority do. It has furthermore been shown that when people form mental images, geometric patterns corresponding to these images are formed in the brain (Kosslyn et al. 2006). Experimental work on monkeys has also shown that topographic images are formed on their visual cortex when they view patterns of lights flashed to them by experimenters (Kosslyn et al. 2006).

Notably, despite arguing for an architecture similar to Kosslyn's, Dennett draws a different conclusion. Kosslyn believed that these experiments showed that people rotate such images in their mind's eye. Dennett saw things otherwise:

Or in other words, it is asking whether any of the pixel-demons want to tell it, "color number 37 here". All the red is gone - there are only numbers in there. In the end, all the work in a CADBLIND system must be done by arithmetic operations on bit strings, just as we saw at the lowest level of Shakey in Chapter 4. And no matter how quasi-pictorial or imagistic the processes are that eventuate in the Vorsetzer's verbal answers to questions, they will not be generated in an inner place where the lost properties (the properties merely "talked about" in the bit map) are somehow restored in order to be appreciated by a judge that composes the answers… People are not CADBLIND systems. The fact that a CADBLIND system can manipulate and inspect its "mental images" without the benefit of a Cartesian Theatre doesn't by itself prove that there is no Cartesian Theatre in the human brain, but it does prove that we don't have to postulate a Cartesian Theatre to explain the human talent for solving problems in "the mind's eye". There are indeed processes that are strongly analogous to observation, but when we strip down Kosslyn's CRT metaphor to its essentials we remove the very features that would call for a Cartesian Theatre. (ibid, p. 297)

Dennett is in effect arguing that while it may seem to some people that they are forming mental images in their mind's eye, closer inspection reveals that they are doing no such thing: he is trying to explain away people's verbal statements that they are forming mental images. Most people find Dennett's view on this topic a bit strange. They can understand how Dennett can treat other people's verbal reports using the heterophenomenological methodology, which treats those reports as being epistemically on a par with the verbal reports of Shakey the robot. The problem most people have with Dennett's characterisation of mental images, however, comes from the first-person point of view. Independent of my verbal reports to others, if I see a purple cow in my mind's eye, then I see it. Consider, by analogy, a table that I see in front of me. It could be argued that despite appearances the table is really just a colourless collection of quarks and gluons. Nonetheless, I still see a table in front of me. Likewise, while my mental image of a purple cow may be caused by messages to my occipital lobe from the temporal lobe, what I see is a purple cow. Dennett's denial of mental images is only explicable from a first-person point of view if he is in fact incapable of forming such images. If he experienced vivid images then surely he would find his own CADBLIND and Shakey thought experiments less impressive than he does.

He goes on to ask us to perform the following introspective experiment:

Here is a simple test to remind us of how limited our imaging abilities actually are: In your mind's eye, fill in the following three by three crossword puzzle, writing the following three words in the columns, starting with the left column: GAS OIL DRY. (ibid, p. 295)

He notes that if the words were written on a page the horizontal words would pop out, whereas in the mind's eye they do not. Here he is guilty of the Typical Mind Fallacy: he assumes that because he cannot do it, nobody can. He admits there may be variation in people's abilities to do crossword puzzles in their mind's eye, and even says that some people may be better imagers than others. However, in the next sentence Dennett reminds us that in the ultimate sense it is all tell and no show (ibid, p. 295); in other words, we do not really use mental images to solve the crossword puzzle. So here again we suggest that the fact that Dennett is a non-imager plays a massive role in the theoretical views he holds about the nature of mental imagery.

Oliver Sacks recently wrote about a patient of his doing crosswords in her mind's eye. We will give the whole quote here, because it contrasts strongly with Dennett's view:

I was reminded, when Lillian told me of this, of a patient I had seen in the hospital some years before, who overnight became totally paralyzed from a spinal cord infection, a fulminating myelitis. When it became evident that no recovery was forthcoming she fell into despair, felt that her life was over - not only the great things of life but the little familiar pleasures of each day, like doing the New York Times crosswords, to which she was addicted. She requested that the Times be brought to her each day so that she could at least look at the puzzle, get its configuration, run her eyes along the clues. But when she did this something extraordinary happened, for as she looked at the clues, the answers seemed to write themselves in the spaces. Her visual imagery strengthened over the next few weeks, until she found that she was able to hold the entire crossword and its clues in her mind, after a single, tense inspection, and then solve it mentally, at her leisure later in the day. This became a source of great solace to her, in her paralysis; she had no idea, she later told me, that such powers of memory and imagery were available to her. (Oliver Sacks, The Mind's Eye, p. 24)

This example is obviously anecdotal, and a verbal report cannot simply be assumed to be a correct description of an experience. Nonetheless, since introspective reports, behavioural evidence and neuroscientific evidence all indicate that people differ in the range of their imaging abilities, we have no reason to doubt Sacks's patient. We suggest that Dennett could benefit from reflecting on this variation in people's mental imaging, and that such reflection is necessary to stop him from unconsciously projecting his own type of mind onto all other people's minds. One of the authors of the present paper (Dave King) tried Dennett's introspective task of imaging crosswords. He described his experiences as follows:

When trying to imagine the crossword in my mind visual images played a limited role. I spelled the words out 'verbally' in my mind, and sometimes could see a glimpsy word in my mind's eye. I figured out what the words were by a process of memory, spelling out each word 'verbally' and then through memory figuring out what each horizontal word would be. Some imagery was present: I vaguely 'saw' a square, and red dots appeared where I imagined the words would be if in a crossword. Looking at each red dot seemed a physical experience; I could feel my eyeballs move as I looked at the dots. The images did not seem to play any role in figuring out the words, though, as I said, images were present a lot of the time.

In an earlier paper the co-author of this paper, David Berman, described his own experience of trying to form a mental picture in a similar game:

Here is another similar test for image impairment. I say: let's play noughts and crosses. You agree. But instead of your playing it on paper, I ask you to play it in your head or mind's eye, using the a, b, c and 1, 2, 3 grid, as used by chess players. Here again the weak or non-imager is going to show his impairment; whereas the photographic and eidetic imager could not only easily play noughts and crosses in his head, but (amazingly to me) actual chess as well. Of course, working out the place of a letter in a word, or playing mental noughts and crosses, is not a task that comes up that often. But what does arise rather more normally is a question like: 'Dad, how do you spell the word . . . ?' For a low imager, that isn't difficult if the word is short. But it is extremely difficult, as I know from my own experience, if the word is long, for then it is hard to be sure which letter one has reached by the middle or end of the word. Again, it is easy enough to do it if you can write the word down, and that is precisely what the strong imager can do, although in his mind's eye. (Philosophical Counselling for Philosophers, p. 7)

The views of Dennett, Sacks's patient, David Berman and Dave King are all different. Dennett, because he is, we believe, a non-imager, assumes that the differences result from theorising on our limited, vague internal experiences. We argue that he believes this because when he introspects he experiences no mental imagery, and he incorrectly assumes that all people have minds the same as his.

Section 4: Dennett as a Heterophenomenological Subject

Dennett discusses pain in Chapter 3 of Consciousness Explained; the chapter is called ‘A visit to the phenomenological garden’. When discussing pain Dennett asks us to perform the following introspective experiment:

Close your eyes and imagine that someone has just kicked you, very hard, in the left shin (about a foot above your foot) with a steel-toed boot. Imagine you almost faint, so nauseatingly sharp and overwhelming is the jolt of pain you feel. You imagined it vividly; did you feel any pain? Might you justifiably complain to me that following my directions has caused you some pain? Some find it somewhat disturbing, and others find it a rather enjoyable exercise of the mind, certainly not as unpleasant as the gentlest pinch on the arm that you could call pain. (Consciousness Explained, p. 60)

Dennett then asks the following two questions:

(1) Imagine you dreamt of experiencing the same kick in the shin that you imagined in your introspective experiment, and you woke up clutching your shin. Are dreamed pains real pains, or a sort of imagined pain?

Dennett’s answer is that dream pains are at least something that we do not like having, whereas imagined pains are something nobody minds having. He then asks another question:

(2) Compare dream pains to pains that arise in you while you sleep, when you roll over and inadvertently twist your arm into an awkward position and then, without waking up, without noticing it at all, roll back into a more comfortable position. Are these pains? Dennett is ambivalent in his answer to this question. He seems to ask it only to illustrate how unclear the notion of pain is: despite some philosophers thinking that qualia like pain are immediately known, the experience is in fact much vaguer than we think.

To support our view that Dennett is a non-imager inadvertently committing the Typical Mind Fallacy, we asked 12 people to do his introspective task. We aimed for a qualitative approach, asking our participants to detail their experiences in as rich a manner as possible; we argue that such an approach is in keeping with the manner in which Dennett answered his own question. We sent our introspective tests to over 40 people and used the results from the 12 who answered. Our aim is merely to illustrate the divergences which occur within our small sample. We think these divergences are suggestive and warrant further research on a larger scale, with more rigorous statistical analysis taking account of age, sex, cultural background, educational level, etc. For the purposes of this paper we argue that our small sample is enough to show that Dennett is wrong to generalise from his own type of mind to the structure of all minds.

Dennett's manner of answering his own question, combined with other studies, reveals a lot about his kind of mind. He argues that imagined pain is something that people do not mind having, while dreamed pain is something people do mind having. Most of the people who answered our questionnaire did not agree with him on this. We also think that Dennett's account reveals that his dream experiences are more real for him than his imagined experiences. This is significant because one of the authors of this paper, David Berman, has evidence that non-imagers typically have vivid dream experiences[4]. This pattern was also observed by the psychologist Bill Faw[5]. So the fact that Dennett admits by implication that his dreams are more vivid and real than his imaginings fits well with our claim that he is a non-imager.

So we argue that there are four key facts which indicate that Dennett is a non-imager who unconsciously assumes that all people have the same mental architecture as himself. (1) There is evidence of great philosophers making this mistake in the past[6]. This obviously does not prove that Dennett is guilty of the same mistake; it is merely intended to establish a historical precedent and to open up the theoretical possibility that Dennett may be guilty of it. (2) Dennett describes himself in terms of Shakey and the Mark 3 CADBLIND, creatures who have no imagery whatsoever, which indicates that he, like them, has no mental imagery. (3) The study of Reisberg et al. (2002) indicates that non-imagers typically remain sceptics about mental imagery despite the overwhelming evidence in favour of its existence. (4) Dennett's confession that he finds dream pains more vivid than imagined pains is consistent with the findings of Faw (1997) and Berman (2009) that non-imagers typically experience vivid dreams.

Section 5: Dennett: Are Dreams Experiences?

The psychologist Bill Faw has argued that people who are non-imagers typically experience vivid dreams. A similar pattern has been noted by David Berman in his studies of people's introspective abilities over the years. In his 2009 A Manual of Experimental Philosophy two of his subjects were non-imagers, and both reported having strong dream experiences. One of his subjects, Timo, made the following report to Berman:

I must confess that I cannot see any images in my mind, except when I fall asleep. But I can for instance freely wander around my childhood home in Vaasa, from room to room, and I can 'see' quite exactly what was there. I go back to that flat. But I do not have visual images which are comparable with those of dreams. I just remember… I suspect that people call the pictures of imagination pictures even if they do not see any pictures. They conventionally call them so… David, believe me, I don't see any pictures. I do the construction conceptually, and I remember what I thought in the (black) visual space. When I walk through my home, I use words. Believe me I see nothing except in some wide figurative sense perhaps. But no images. It is all about thinking not seeing. (A Manual of Experimental Philosophy, pp. 63-66)

We can see from Timo's report that his view of images as a kind of metaphor is similar to Dennett's view. Like Dennett, he reports dream experiences that are much more vivid than anything his waking imaging abilities provide. The other non-imager in Berman's study made similar claims. Marek, who reported that he is a non-imager, made the following claim about his dream experiences:

Yes I do have visual dreams, in fact the images (plus voices, of course) are so vivid that I try and do interact with the people in the dreams. So like Descartes predicted, I am fooled in my dreams and take them for reality. Sometimes though I can tell that I am dreaming when something very illogical happens. (ibid, p. 70)

We have argued in this paper that Dennett is a non-imager and that, like many non-imagers, he has vivid dream experiences. The evidence we cited above from Berman and Faw indicates a correlation between non-imagery and strong dream experiences. We have also argued that Dennett's discussion of the difference between imagined pain and dreamed pain further illustrates our point: he claimed that dreamed pain would be more vivid than imagined pain, and this claim fits well with our empirical data that non-imagers have vivid dream experiences[7].

There are, however, two objections which can be made to our claim. (1) Our claim that Dennett has vivid dream experiences looks inconsistent with our claim that he denies that mental images exist: if Dennett does experience mental images while dreaming, this seems to undermine our claim that he denies the existence of images because he never experiences them. (2) Dennett has written two papers about the nature of dream experiences, and his views there do not, on the face of it, sit well with our view that he has strong dream experiences. We will deal with Dennett's theoretical papers on dreams first, and then answer objection (1).

In Dennett (1978b) he aims to undermine the authority of the received view that dreams are experiences. The received view claims that dreams are experiences that occur during sleep, and that these experiences consist of sensations, thoughts, images and feelings organised in a narrative form, occurring somehow in awareness even though we are not consciously aware at the time. Dennett aims to cast doubt on the received view and to outline a view of experience and memory which he can incorporate into his physicalistic theory of consciousness.

Dennett notes that the received view is an empirical theory, and that there is some anecdotal evidence which goes against it. For example, people are often woken by an alarm which they hear as part of their dream, yet the REM sleep associated with the dream occurred hours before. So unless the dreamer has precognition, he cannot have dreamt of the alarm at that earlier time. Dennett claims that this suggests dreams may be mere recollections, not something we actually experience. He conjectures that science may show that dreams are like déjà vu, i.e. it only seems as if we had them before (Dennett, 'Are Dreams Experiences?', p. 155). We may, he speculates, have dream memories constructed in our brains ready to be 'recollected' upon waking.

Dennett argues that the difference between the received view and his alternative 'cassette' conjecture is that on the new view, unlike the old, there is nothing it is like to have a dream (though there is something it is like to remember a dream). For Dennett there is nothing to distinguish the received view from the cassette alternative. He notes that only our common-sense intuition counts against his alternative, and he argues that common-sense intuitions should not decide a scientific matter.

Dennett points out that one criterion which counts against the received view is ordinary language. We would normally say that to experience something we must be conscious of it, whereas on the received view we supposedly have unconscious experiences. This is a reason to prefer the alternative view: it is less vulnerable to conceptual confusion. He offers a further thought experiment which he claims shows that dreams may not be experiences. Suppose a person, as a result of some brain twinge, claims to have seen a ghost at 12pm. Suppose further that we are with the person at 12pm, and that between 12pm and 12.14pm he behaves normally and gives no indication of seeing a ghost. Now suppose that at 12.15pm this person gets agitated and claims to have seen a ghost at 12pm. Since the person gave no indication of this at 12pm, we will assume that at 12.15pm he is having a hallucination of a recollection (ibid, p. 20). Dennett claims that this hallucination story has consequences for the theory of dreams. No normal dream gets reported until after the fact, so we do not have sufficient evidence to say people are experiencing dreams. He makes an exception for nightmares, because there we have behavioural evidence at the time in the form of increased heart rate, moans and cries. Other bad dreams, he claims, would not be experiences, because of the lack of behavioural evidence at the time we claim to have experienced them, though we could still remember the dream in agony (ibid, p. 20).

He argues quite plausibly that if the paralysis of sleep is only peripheral and more central processes are lit up, e.g. the visual system (which it is), this may count towards saying that dreams are experiences. However, given that the visual area is also lit up in blindsight and hysterical blindness, we cannot say for certain that a lit-up visual area means that we are having dream experiences. He notes that it is an open theoretical question whether dreams are experiences.

He argues that we will want to investigate four things if we are to construct an accurate theory of dreams.

(1)   The role of experiencing in guiding current behaviour.

(2)   Our current abilities to say what we are experiencing.

(3)   Our recollective capacities to say what we have experienced.

(4)   The functional saliencies which emerge from empirical investigation.

So in effect Dennett argues that we cannot at present decide between the alternatives; any decision between them must be established empirically.

In his later paper 'What is Dreaming for, if anything?' (2013) Dennett defends this Malcolm-type view of dreams against an alternative proposed by the psychiatrist Allan Hobson, whose theory of dreams is a version of the received view. Dennett again offers a series of arguments which he thinks show that his view is as viable as the received view. The one area where he expands upon his earlier theory is in emphasising the theoretical possibility that linguistic competence plays a central role in developing our hallucinations of dream experiences.

It might be felt that the view outlined in Dennett's two papers on dreams, namely that dreams are not experiences, is strong evidence against our view that he is a vivid dreamer. We think, on the contrary, that Dennett's papers on dreams support our position rather than undermining it.

It is worth noting that the empirical evidence that non-imagers have vivid dreams seems, on the face of it, a strange fact: if a person is incapable of having mental images, it is odd that they can have vivid dreams, which would seem to involve imagery. However, from a phenomenological point of view mental images are different from hallucinations. People never take mental images for the real thing (even eidetic imagers do not), whereas people typically confuse hallucinations with real experiences. The fact that dreams are typically confused with reality indicates that they have the quality of a hallucination rather than of imaging. Dennett, while claiming that dreams may not be directly experienced, does indicate that we experience our memories of them, and that they are hallucinations of recollections of something we never experienced (ibid, p. 20). But we experience the actual hallucination, and the hallucination of a bad dream can be remembered in agony (ibid, p. 20). What this means is that although the hallucination is a false memory, we still experience the hallucination itself, and it can leave us in agony. So this view is actually consistent with Dennett's claim that imagined pain is not as severe as dreamed pain. We will not evaluate Dennett's defence of Malcolm's claim that dreams are not directly experienced, because it is not relevant to this paper and Dennett admits it is only a theoretical possibility. It is enough to note that nothing in Dennett's papers on dreams is inconsistent with our claims that Dennett is a non-imager who has vivid dream experiences: even if we consider dreams as hallucinated memories, we can still experience these hallucinations in agony. This contrasts strongly with his claim in Consciousness Explained that people cannot imagine real pain, and it is consistent with his claim that dreamed pain is something we mind having. It is also consistent with our view that Dennett is a non-imager who is a vivid dreamer. Dennett is guilty of assuming that his idiosyncratic psychology is representative of human minds. He is not alone in this: Berman has argued convincingly that Locke and Berkeley were also guilty of it. We think it worth researching whether other thinkers in the philosophy of mind, like Nagel and McGinn, make a similar mistake. Philosophers like Dennett and Quine have argued that philosophy should become more naturalistic and empirical; we aim to extend this empiricism to the study of the individual minds of philosophers and how those minds unconsciously influence their philosophical theories.

Section 6: Real Seeming

In Content and Consciousness Dennett talks about the fact that while it may seem to some people that they have mental imagery, closer examination reveals that what they call a mental image is really only a description. Twenty-three years later, in Consciousness Explained, when discussing Kosslyn's experiments on mental imagery, Dennett noted that despite appearances mental imagery is really all tell and no show. One curious thing about Dennett's view is his claim that, despite the way things seem, a mental image is really a mental description. What is strange about this is the idea that a description can seem like an image; this is a very odd way to understand the word 'seem'. A paradigm example of an x seeming like a y is given by Descartes: a stick in transparent water will seem to be bent because of light refraction, though in reality the stick is not bent. What Descartes means by 'seems to be' is the same as 'appears to be', and this of course is the standard meaning of 'seems to be'. However, even to a weak imager like me, it is patently obvious that mental images are nothing like mental descriptions. If something really seemed (as in appeared) to me like an image, then I would have an experience of something image-like, and a description is in no way image-like. This leads to the question of what Dennett could possibly mean when he admits that it at least seems to some people that they experience mental images.

In Consciousness Explained Dennett carefully explains what he means by the word 'seems'; evaluating his views on this will help clarify his strange beliefs about the nature of images. In Chapter 5, Section 5 of Consciousness Explained Dennett discusses the colour phi experiment[8], and in this discussion he makes explicit his strange views on the nature of 'seeming'. The colour phi phenomenon is apparent motion. We see examples of it on our television screens every day, where a series of still pictures flashed one after the other at a certain speed creates the illusion of motion. Dennett discusses a simple example of colour phi in which two spots separated by as much as 4 degrees of visual angle are flashed rapidly, creating the illusion of one spot moving back and forth (ibid, p. 114). Kolers and Grunau (1976) did a phi experiment with two dots, one red and one green, flashing on and off. This gave the illusion of a red spot starting to move and changing colour to green mid-passage. Since the red dot is not moving and does not turn into a green dot, we need to ask what is going on in this illusion. The question is: how do we see the dot turn green before we have actually seen the green dot? One might think that the mind must first register the red dot and then the green dot, and that only after that point is the apparent motion played before the mind's eye. To think this way, Dennett warns, is to demonstrate that one is still in the grip of the metaphor of the Cartesian Theatre (ibid, p. 115).

To loosen the grip of this picture on our minds Dennett discusses two fictional processes which one could attribute to the brain, which he calls the Orwellian and the Stalinesque. An Orwellian process occurs when I misremember something because my brain tampers with my memory so that I no longer remember accurately. A Stalinesque process occurs when the brain projects a false picture of reality into the mind's eye. Dennett notes that while the distinction between Orwellian and Stalinesque processes makes sense in the external world, it is an illusion to assume that it makes sense as an explanation of what is going on at the level of the brain.

Let us think of both of these processes as they apply to the case of colour phi. In the Orwellian case we did not see the apparent motion; our brain merely revised our memory and informed us that we did. In the Stalinesque case we unconsciously registered the two dots, and afterwards our brain created a kind of mock event for us to watch. Dennett notes that once we give up the notion of Cartesian Materialism, we will see that there is no answer to the question of whether an Orwellian or a Stalinesque process took place. He puts things as follows:

So here is the rub: We have two different models of what happens to the color phi phenomenon. One posits a Stalinesque "filling in" on the upward, pre-experiential path, and the other posits an Orwellian "memory revision" on the downward, post-experiential path, and both of them are consistent with whatever the subject says or thinks or remembers… Both models can deftly account for all the data - not just the data we already have, but the data we can imagine getting in the future. (ibid, pp. 123-124)

So there is no fact of the matter which can decide between the two different stories. Dennett argues that the reason we cannot decide between the two accounts is that there is really only a verbal difference between them. With Dennett's rejection of Cartesian Materialism and his alternative Multiple Drafts theory of consciousness, we can no longer draw a non-arbitrary line to decide when an event becomes conscious. There is therefore no fact of the matter as to whether an Orwellian or a Stalinesque process took place.
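The underdetermination claim can be pictured with a small sketch. The code below is our own toy illustration, not Dennett's model: two 'processes' that differ only in where the motion content is inserted (before or after the 'experience'), yet yield exactly the same observable report, so no datum can decide between them. All function names are invented assumptions.

```python
# A toy sketch (ours, not Dennett's model) of why the Stalinesque and
# Orwellian stories are empirically indistinguishable: they differ in
# where the motion content is inserted, but every observable datum -
# here, the subject's verbal report - comes out the same.

def stalinesque(stimuli: list) -> str:
    # Pre-experiential "filling in": a mock show is composed first,
    # then experienced, then reported.
    show = stimuli + ["red spot seen moving and turning green"]
    return f"Subject reports: {show[-1]}"

def orwellian(stimuli: list) -> str:
    # Post-experiential "memory revision": the stimuli are experienced
    # as they are, then memory is rewritten before the report is made.
    memory = stimuli + ["red spot seen moving and turning green"]
    return f"Subject reports: {memory[-1]}"

stimuli = ["red spot flashed", "green spot flashed"]
assert stalinesque(stimuli) == orwellian(stimuli)  # identical reports
print(stalinesque(stimuli))
```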

When Dennett claims that we cannot decide between the Stalinesque and Orwellian alternatives, we are left with what seems like a mystery. In the external world a red spot is not really moving and turning into a green spot, yet Dennett also denies that a Stalinesque show trial is played before the mind's eye. So the obvious question is: where does the movement of the spot occur? Dennett's answer is that the spot does not move and turn green; it only seems to. However, to say that a spot seems to move is to say that people have an experience of the spot moving, and this leads us back to our original question: what generates this experience, and how is it generated? Dennett thinks this is a bad question, because the brain does not need to create an experience of the spot moving; it merely has to form a judgment that such movement occurred:

The Multiple Drafts model agrees with Goodman that retrospectively the brain creates the content (the judgment) that there was intervening motion, and that this content is then available to govern activity and leave its mark on memory. But the Multiple Drafts model goes on to claim that the brain does not bother “filling in” the blanks. That would be a waste of time and (shall we say?) paint. The judgment is already in, so the brain can get on with other tasks. (ibid, p. 129)

This claim of Dennett's is extremely strange. He is claiming that the brain judges that the motion occurred although, as a matter of fact, we do not experience the motion; we only think we do. The obvious reply is to categorically state that I do experience the movement, and that I judge that the movement occurred based on this experience. In other words, the experience is prior to the judgment. The experience is not of a fact in the external world (where no movement occurred); it is rather an experience of a person's subjective qualia. When Dennett denies that it is the experience that leads to the judgment, he is leaving the phenomenal experience out and focusing entirely on access consciousness.

The claim that Dennett is denying the existence of phenomenal consciousness is, on the face of it, an incredible one. So before proceeding it is important to show that this is indeed Dennett's position; we will now provide detailed quotes from him to make it clear. When discussing phenomenal space Dennett makes the following claim:

Now what is phenomenal space? Is it a physical space inside the brain? Is it the onstage space in the theatre of consciousness located in the brain? Not literally. But metaphorically? In the previous chapter we saw a way of making sense of such metaphorical spaces, in the example of the “mental images” that Shakey manipulated. In a strict but metaphorical sense, Shakey drew shapes in space, paid attention to particular points in that space. But the space was only a logical space. It was like the space of Sherlock Holmes’s London, a space of a fictional world, but a fictional world systematically anchored to actual physical events going on in the ordinary space of Shakey’s “brain”. If we took Shakey’s utterances as expressions of his “beliefs”, then we could say that it was a space Shakey believed in, but that did not make it real, any more than someone’s belief in Feenoman would make Feenoman real. Both are merely intentional objects. (Ibid, pp. 130-131)

The above passage is very instructive. It speaks to our topic of mental images and again shows that Dennett thinks of them as theorists' fictions. Furthermore, his invoking of Shakey, who despite its verbal reports is not experiencing any items in phenomenal space, shows that Dennett thinks that we, despite our verbal reports, are likewise not experiencing anything in phenomenal space. Dennett is claiming that our brains may tell us that we have such and such experiences, and that as a result of this brain report we form the judgment that we saw a red light move and turn into a green light; but this judgment, despite appearances, is not grounded in a phenomenal experience.

It is worth noting that a lot of thinkers misinterpret Dennett's claims about 'seeming' and colour phi as indicating that he denies that we experience colours. This is not the case. Dennett's arguments above apply only to colour hallucinations; he tells a different story about how we perceive colour in the external world.

To understand Dennett's views on colours it is helpful to think in terms of the distinction between primary and secondary qualities. One of the central motivations for claiming that the world is in your head is the existence of secondary qualities. When one looks at a beautiful garden one sees a variety of different colours: the bright yellow sunflower, the green grass, the multi-coloured butterflies, the blue sky and the bright yellow sun. Since the seventeenth century, people like Galileo and Locke have been telling us that colours do not exist in the mind-independent world; colours are effects of light reflecting off objects and hitting our retinas in a variety of different ways. The majority of scientists since Galileo accept this dichotomy between primary and secondary qualities. The primary qualities are solidity, extension, motion, number and figure, while the secondary qualities are colour, taste, smell and heard sound. One consequence of accepting this picture is that the world is not as it reveals itself to us in experience, and colours do not exist in the mind-independent world. A further consequence is that the rich world we experience, consisting of tastes, smells and colours, exists only within our minds. So on this view we have a subject who is presented with certain experiences, only some of which correspond with a mind-independent entity. The Cartesian materialist who accepts this world picture has a difficult job on his hands. Nowhere in the brain is the experience of a blue sky or of a yellow daffodil located. He may be able to provide neural correlates for these experiences, but he will not be able to point to the spatio-temporal location where the experience and the subject are located. So presumably the Cartesian materialist will have to argue for a strong emergence thesis.

Rather than going down this road, Dennett interprets the dichotomy between primary and secondary qualities differently from most contemporary theorists. He has discussed the status of colours throughout his philosophical development: in particular in his 1968 Content and Consciousness, his 1991 Consciousness Explained and his 2005 Sweet Dreams: Philosophical Obstacles to a Science of Consciousness. We will now try to give a synoptic view of Dennett's views on colours. In his first major discussion of the topic he noted that while most believe that colours are secondary qualities and do not exist in the external world, there are reasons to doubt this conclusion.

He centres his criticism on language and on what we are referring to when we use colour words. If we view colours as secondary qualities, we are committed to the view that when I refer to something red I am referring to something within my mind. But on this view, when two people claim to be referring to something red, we do not know whether they are referring to the same thing, as their inner experiences of red may differ, and we cannot decide the matter because we would have nothing public against which to compare their experiences. If we do not want to admit the possibility that a teacher can never know whether his pupil has actually learned the meaning of the word 'red', we must admit that colour words refer to publicly observable entities.

One difficulty with solving the sceptical problem of colour reference by arguing that colour words refer to publicly observable entities is that it leaves us with the conundrum of where colours exist. They don't exist in the mind-independent world, they don't exist in the mind, and there is nowhere else for them to exist. So one is led to the silly conclusion that colours do not exist anywhere. This conclusion must be wrong, and Dennett correctly notes that colour words refer to publicly observable entities that one can be right or wrong about (Content and Consciousness, p. 161). So colours seem to exist, and to exist in a publicly observable sphere.

For Dennett, since colours are publicly observable entities which we can be right or wrong about, they must be properties of the external world. This leaves him with the question: what properties exactly are they? He notes that colours are not reflective properties of surfaces which can be cashed out at a sub-atomic level. This is because:

“Although the sub-atomic characteristics of surfaces that reflect light predominantly of one wavelength can now be described in some detail, those different types of surface do not correspond neatly to the colours we observe things to be.” (ibid, p. 161)

Also, different wavelengths of reflected light can cause the same colour experience in a person. So the job of characterising what property colours actually are is more complex than one might assume. Dennett notes that when a person refers to red, we will need to cash out the property they are referring to in terms like: the person is referring to the reflective property of x or y or z… (and the disjunction associated with one colour might be very long).
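The disjunctive point is easy to make concrete. The sketch below is our own toy illustration, not Dennett's account; the profiles and their descriptions are invented. It simply shows how one colour term can pick out a long disjunction of physically different surface properties.

```python
# A toy illustration (ours, not Dennett's account) of the disjunction
# point: physically different surface-reflectance profiles all fall
# under one colour term, so 'red' picks out a long disjunction of
# physical properties rather than a single one. The profiles below
# are invented for illustration.

RED_PROFILES = {
    "profile_a": "peak reflectance around 650 nm",
    "profile_b": "twin reflectance peaks at 620 nm and 700 nm",
    "profile_c": "broad reflectance across wavelengths above 600 nm",
}

def is_red(surface_profile: str) -> bool:
    """A surface counts as red if it has any profile in the (possibly
    very long) disjunction associated with 'red'."""
    return surface_profile in RED_PROFILES

print(is_red("profile_a"))  # True
print(is_red("profile_b"))  # True: physically different surface, same colour
print(is_red("profile_x"))  # False: not in the disjunction
```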

Dennett asks: if the disjunction of properties which can be associated with a person's experience of a colour has little internal unity, are we driven to the conclusion that colours do not exist? To think through this possibility he considers colour-blind people, who have poorer discriminative capacities than ours, and a hypothetical alien whose colour-discriminative capacities are greater than ours. He notes that we would say that the colour-blind man, who may say that ripe (red for us) apples and grass are the same colour, is suffering from a cognitive illusion. On the other hand, if an alien had discriminative capacities so much greater than ours that it constantly saw things as changing colour, we would also say that it was experiencing colour illusions. This is because the meaning of colour terms is defined in terms of OUR discriminative capacities, which means that WE judge certain things in the world to be red, green, etc. So relative to our form of life these other perceivers would be suffering from a form of cognitive illusion.

Dennett concludes with the following statement:

Colour is not a primary physical property like mass, nor is it a complex of primary properties, a structural feature of surfaces. Nor again is it a private 'phenomenal' quality or an 'emergent' quality of certain internal states. Colours are what might be called functional properties. A thing is red if and only if when it is viewed under normal conditions by normal human observers it looks red to them, which only means: they are demonstrably non-eccentric users of colour words and they say, sincerely, that the thing looks red. Their saying this does not hinge on their perusal of an internal quality, but on their perception of the object, their becoming aware that the thing is red. (ibid, p. 163)

We are not really sure whether Dennett manages to avoid the problem of where the experience of red is located. However, it should be obvious that he is not denying that colours exist; rather, he is claiming that they are not paraded in a Cartesian Theatre.

In Part 1 of this paper we have shown that Dennett denies that mental images exist. He treats us as robots like Shakey, whose programming forces us to judge that we experience mental imagery, and he explains away the fact that people report that it seems to them that they experience mental imagery by denying that 'real seemings' exist. Dennett's account fails: he cannot explain why some people report experiencing vivid mental imagery while others report that they do not, and so it cannot accommodate the heterophenomenological reports of ordinary human subjects. Furthermore, when judged from a heterophenomenological point of view, his own reports are consistent with those we would expect from a non-imager. So we conclude that when it comes to mental imagery Dennett is guilty of the Typical Mind Fallacy.

In our introduction we discussed Berman's typology, which divides philosophers into three main types, and we argued that Dennett is a combination of the Type 2 and Type 3 thinker. We have seen in Part 1 that Dennett denies that things like the colour phi phenomenon, afterimages and mental images are presented before the mind's eye. This denial is consistent with the kind of claims made by William James. Dennett's denials are strong indications that he is not a Type 1 Mentalistic Thinker like Descartes; rather he is, like William James, a Type 2 Bodily Thinker. In Part 2 we will show that while Dennett is primarily a Type 2 Bodily Thinker, he is also, to a lesser extent, a Type 3 Socio-linguistic Thinker. We will show this by evaluating Dennett's views on the nature of pain.

PART 2: DENNETT ON PAIN

Dennett primarily explored the nature of pain in the early days of his philosophical development, between 1968 and 1978. His views on pain are subtle, and understanding them will give us an insight into the nature of Dennett's type of mind. We will begin by discussing his views on pain as developed in his 1968 doctoral thesis Content and Consciousness, then discuss his later views as developed in 'Why You Can't Make a Computer That Feels Pain'. Once this is done we will show what these views reveal about the nature of Dennett's mind.

On page 6 of Content and Consciousness Dennett denies the identity theory. He argues that pains are too different from brain states for one to be identified with the other. He further cites Putnam's argument that the identity theory must be false because it leads to absurd, counterintuitive results. If the identity theory were true, then only creatures with identical chemistry could think the thought that 'Spain is in Europe'; this is surely false, since clearly a robot or an alien could think that thought. So Dennett argues that the identity theory is false. However, without the identity theory we are faced with all the problems of dualism (how do the mind and body interact, etc.). He argues that a lot of people are drawn to the identity theory because it seems less problematic than the intractable dualistic picture. Nonetheless, he holds that the identity theory, like dualism, is false, so we need a new way of dealing with these problems.

Dennett's suggestion is a kind of blending of Ryle and Quine. Ryle had long noted that it was a category mistake to try to reduce mind talk to brain talk, because they belong to different logical types. Critics of Ryle had equally long noted that he never really spelled out what he meant by 'different logical levels' and things belonging to different 'conceptual categories'. Dennett aimed to make sense of talk of the mental and the physical belonging to different categories by appealing to Quine's philosophy of language as spelled out in Word and Object. Quine had noted that certain phrases, such as 'for-the-sake-of', are not translatable into the syntax of quantification, and so have no clear identity conditions which would force us to be ontically committed to the existence of 'sakes'. We can say things like 'for the sake of John', but we cannot say whether there is one 'sake' or two, nor can we say of 'sakes' that they have attributes such as being 'a good sake' or 'a big sake'. Dennett argues that despite appearances, terms like 'sakes' are non-referential, and that confusing non-referential with referential terms can generate a series of intractable problems. To see this he asks us to think of the term 'voice'. 'Voice' is used in various phrases which give the appearance that it refers to an entity in the world. So when, for example, it is said that 'John has lost his voice', an identity theorist will assume that 'voice' is identical with some physical process going on in John, that the word 'voice' refers to a physical process x. There are problems with this view. Suppose one were to identify John's voice with a particular fact about John's vocal cords. This identification is not complete; there are recordings of John's voice which will survive even after John has died. Given that the recordings are all in different media, it makes little sense to identify 'voice' with vocal cords, and CD recordings, and records: all that these various things have in common is a disposition to cause certain experiences in normal people. Do we say that 'John's voice' denotes various different concrete objects, so that 'John's voice' is a universal? Do we say that 'John's voice' is somehow identical with all its various instantiations? If so, it is hard to make sense of claims like 'John's voice is strained' or 'John has lost his voice'. When we say that 'John has lost his voice', do we mean that his vocal tract is damaged and that he has lost all recordings of his voice? Dennett claims that such silly questions can be avoided by accepting that 'voice', like 'sake' and 'dint', is a non-referential term. These are ways of talking which are useful in some contexts but which do not carve nature at its joints. He suggests that we can treat 'the mind' in a similar way: a useful way of talking which does not pick out an entity in the external world.

The concept of pain is one of the central concepts for any theory of the mind. Some people have used pain as a clear test case for the identity theory: to be in pain state x just is to be in neural state y. Dennett notes that this position has its difficulties because pains are nothing like brain states. If pain state x has intrinsic properties which are not shared by neural state y, then by Leibniz’s law they cannot be the same thing. So Dennett admits that brain states and pain states are not the same thing. Since he does not want to defend any kind of emergent dualism, he instead decides to dissolve the problem by analysing ‘pain’ talk as non-referential, just like talk of ‘voices’, ‘sakes’ and ‘dints’.

Dennett notes that if we try to explain the nature of pain at the personal level we will run into a series of cul-de-sacs. (1) We cannot analyse the notion that ‘I am in pain’. If someone asks me how I know I am in pain, or how I can tell pain from non-pain, no real explanation is possible. I could say that pain hurts but non-pain doesn’t; however, this merely relies on the unanalysed notion of ‘hurt’, which will presumably have to be explained by invoking ‘pain’, so our purported explanation will be circular. The truth is that we just do discriminate pain from non-pain. It is automatic, and we follow no explicit theory when judging that we have experienced pain. (2) At the personal level we likewise just automatically locate our pains; again we do not follow an explicit theory when doing this. (3) We cannot answer the question of what it is about painfulness that makes us want to avoid it. Saying we want to avoid pain because it hurts is no answer, because it raises the unanswerable question of why we want to avoid being hurt. Because of these three reasons, Dennett correctly concludes that we cannot explain pain at the personal level: any such purported explanation will of necessity presuppose the undefined notion of pain. Since explanations at the personal level have failed, Dennett suggests that perhaps we should try to explain things at the sub-personal level. However, he correctly notes that such a sub-personal explanation will necessarily involve a change of subject; at the sub-personal level we will no longer be speaking about pain:

But when we abandon the personal level in a very real sense we abandon the subject matter of pains as well. When we abandon mental process talk for physical process talk we cannot say that the mental process analysis of pain is wrong, for our alternative analysis cannot be an analysis of pain at all, but rather of something else-the motions of human bodies or the organization of the nervous system. Indeed, the mental process analysis of pain is correct. Pains are feelings, felt by people, and they hurt. People can discriminate their pains and they do so not by applying any tests, or in virtue of any describable qualities in their sensations. Yet we do talk about the qualities of sensations and we act, react and make decisions in virtue of these qualities we find in our sensations. Abandoning the personal level of explanation is just that: abandoning the pains and not bringing them along to identify with some physical event. The only sort of explanation in which ‘pain’ belongs is non-mechanistic; hence no identification of pain with brain processes makes sense… (Content and Consciousness, p. 103)

So Dennett is arguing that a personal level explanation will end up vacuous, while a sub-personal explanation will be a real explanation but will not deal with the personal level phenomenon of pain. Since the personal level explanation is a non-explanation, we need an explanation at the sub-personal level, and this, of course, is not an explanation of the common sense term ‘pain’. So Dennett is an eliminativist about pain when it comes to scientific explanation. However, it may seem almost incoherent to claim that Dennett is an eliminativist about pain. In the above quote he claims that at the personal level people can say truly that they are in pain and that the pain hurts. So Dennett could be accused of making contradictory claims about the nature of pain: at the scientific level he acts as an eliminativist about pain, whereas at the personal level he admits that pain exists and is awful. On pain of admitting true contradictions into his theory, Dennett needs to decide which of the two levels of explanation is correct. Dennett, as a good naturalist, sides with the scientific level of description and aims to explain away the personal level. He began his attack on the personal level nearly a decade later with his famous 1978 paper “Why You Can’t Make a Computer That Feels Pain”. In that paper Dennett argued that the personal level concept of pain was internally contradictory, and so could be eliminated without a guilty conscience. In his earlier discussion of pain in Content and Consciousness, by contrast, Dennett had simply claimed that ‘pain’ was a non-referential term: using pain talk was a way for subjects to organise their experiences, but such talk will disappear when one describes organisms in a scientific manner.

Dennett offers the following physicalistic explanation of what is known at the personal level as pain:

When a person or animal is said to experience a pain there is afferent input which produces efferent output resulting in certain characteristic modes of behaviour centring on avoidance or withdrawal, and genuine pain behaviour is distinguished from feigned pain behaviour in virtue of the strength of the afferent-efferent connections-their capacity to overrule or block out other brain processes which would produce other motions. That is, the compulsion of genuine pain behaviour is given a cerebral foundation. (ibid, p. 106)

Immediately on reading the above passage the reader will object that Dennett is not talking about actual pain but rather about pain behaviour. Dennett anticipates this objection and replies as follows:

Now would this account of pain behaviour suffice as an account of real pain behaviour, or is there something more that must be going on when a person is really in pain? It might be supposed that one could be suddenly and overwhelmingly compelled to remove one’s finger from a hot stove without the additional ‘phenomenon’ of pain occurring. But although simple withdrawal may be the basic or central response to such stimulation, in man or higher animals it is not the only one. Could any sense be made of the supposition that a person might hit his thumb with a hammer and be suddenly and overwhelmingly  compelled to drop the hammer, suck the thumb, dance about, shriek, moan, cry, etc., and yet still not be experiencing pain? That is, one would not be acting in this case, as on a stage; one would be compelled. One would be physically incapable of responding to polite applause with a smiling bow. Positing some horrible (but otherwise indescribable) quality or phenomenon to accompany such a compelled performance is entirely gratuitous. (ibid, p.106)

Here again we see Dennett arguing that the personal level concept of ‘pain’ is explanatorily barren, non-referential, and useless for a scientific theory. He argues that if we want to explain ‘pain’ we must do it in terms of behaviour and nerve impulses, and of course in that case we will not be explaining the personal level concept of pain at all. This approach has won Dennett few adherents; it seems to leave out the most important aspect of pain, its horrible feeling. Few people could accept an explanation of pain which left out the actual feeling of pain. Dennett’s later attacks on the notions of qualia and real seemings were further designed to undermine the idea that pain is a thing which we experience, as opposed to a mere seeming. However, prior to developing his arguments against qualia and real seemings, he attacked the very coherence of the personal level concept of pain.

In his “Why You Can’t Make a Computer That Feels Pain” Dennett developed a detailed thought experiment about building a computer which feels pain. He begins by discussing a computer simulation of pain, noting that such a simulation will no more feel pain than a computer simulation of a hurricane will get us wet. The simulation, he notes, will have to make accurate predictions of pain behaviour. Thus it must predict, for example, that a person who has an anvil dropped on his unanaesthetized left foot will jump around in agony on his right foot, with a tear in his eye, screaming. He anticipates that people will be unsatisfied with such a program, as it leaves out the internal aspect of the behaviour. So he first improves on the program by stipulating that it can predict the internal behaviour of C-fibres etc. To the objection that the program still leaves out the feel of pain, he stipulates that the program can report pain states and how vivid the states are:

Again we feed in

‘An anvil is dropped from a height of two feet on S’s left foot,’

And this time we get back:

There is a pain, P, of the in-the-left-foot variety in S; P begins as a dull, scarcely noticeable pressure, and then commences to throb; P increased in intensity until it explodes into shimmering hot flashes of stabbing stilettoes of excruciating  anguish (or words to that effect)….; S’s C-fibres are stimulated… (Why You Can’t Make a Computer That Feels Pain p. 418)

Dennett argues that there is no reason in principle why we can’t build a program capable of making the above predictions. To people who object that such a description leaves out actual pain experience, Dennett replies that it is meant to be a simulation, so it is no real objection that it leaves out actual pain, any more than it is an objection to a simulation of a hurricane that it does not make you wet.
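
To fix ideas, here is a minimal sketch of the kind of descriptive pain simulation Dennett imagines: a program that maps a described stimulus to a verbal pain report. Everything in it (the function name, the severity scale and thresholds, the report templates) is our own invented stand-in for illustration; Dennett specifies no such details.

    # A purely illustrative sketch of a descriptive pain simulation.
    # The stimuli, thresholds and report templates are invented stand-ins.
    def simulate_pain_report(stimulus, body_part, severity):
        """Map a described noxious stimulus to a verbal pain report (toy)."""
        if severity < 3:
            quality = "a dull, scarcely noticeable pressure"
        elif severity < 7:
            quality = "a throbbing ache"
        else:
            quality = "shimmering hot flashes of stabbing anguish"
        return (f"Input: {stimulus}. There is a pain of the in-the-{body_part} "
                f"variety in S; it presents as {quality}; S's C-fibres are stimulated.")

    print(simulate_pain_report("an anvil is dropped from two feet", "left-foot", 9))

The point of the sketch is only that nothing in such a program requires felt pain: it is descriptions in, descriptions out, exactly as Dennett says.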

He argues that another project the skeptic may demand is the building of a machine which can actually experience pain: a machine which does things in its environment as opposed to merely simulating them. At this point Dennett discusses our old friend Shakey. He argues that what the skeptic actually wants us to build is a Shakey-type robot which can actually feel pain.

Dennett argues that to build our robot we will need to instantiate our simulation/theory of pain in the robot’s circuits. We will have to modify the theory in ways that connect with the robot’s motor abilities. It is easy enough to build a robot that can shed a tear; it is obviously a much more complicated matter to build a robot with sufficient motor control to dance around on one foot when struck on the foot. However, none of this is a problem in principle; it just requires complex engineering. Such a robot would have pain behaviour and the ability to report that it was in pain; Dennett even argues that such a robot would have the ability to register and report pain intensity:

But what about the rest of our earlier simulation? What happens to the hot flashes and dull throbs mentioned in our descriptive program? These parts of the output we transform into labelled flashing lights and leave them that way: sometimes the ‘dull throb’ light is on (blinking slowly if you like) and sometimes the ‘hot flash’ light is on. If the skeptic insists on more verisimilitude here, what can he be asking for? Remember that these lights are not blinking randomly. The ‘dull throb’ light goes on only at appropriate times, the robot can then say ‘there is a dull, throbbing pain’ and the other apposite side effects of dull, throbbing pains are presumed to be arranged to coincide as well. (ibid, p. 420)

Most people will not be happy with this robot analogue; they will argue that the robot is not actually experiencing pain despite the complex pain behaviour it engages in. Dennett thinks that such people are looking for a robot whose pain is identical to human pain. He argues that such people are holding AI researchers to impossibly high standards: nobody would argue that a robot cannot walk because its legs are not made of flesh and bone and hence are not identical with human legs, so why claim that robot pain must be identical to human pain? As long as the two are functionally equivalent, Dennett sees no problem with equating his imagined robot’s pain with human pain. However, he recognises that most people do not share his functionalism; such skeptics will not be happy until computers feel actual pain in the same way normal humans do. Dennett, though, argues that since the personal level concept of pain is in such bad shape and so full of contradictions, it may be impossible to instantiate in any creature, not just a robot.

He further notes that it was assumed above that we would have no difficulty constructing a theory of pain, and that our difficulty would come only from the attempt to instantiate this theory in the robot. This assumption can be questioned, so to test it he tries to construct a theory/model of pain which is consistent with our various different intuitions about pain. His model tries to take account of various neurological facts about our pain receptors. So, for example, he notes that when our nociceptors are stimulated through injury, the signals travel to the brain through two different types of fibres: rapidly through the large myelinated A-fibres, and slowly through the narrow unmyelinated C-fibres (ibid, p. 424). A-fibres and C-fibres have two different functions: C-fibres transmit ‘dull’, ‘deep’, ‘aching’ or ‘visceral’ pains, while A-fibres cause ‘sharp’, ‘bright’, ‘stabbing’ pains. Following Melzack and Wall, Dennett notes that the neurological evidence indicates that A-fibres inhibit the effect of C-fibres, closing the gate on pain input transmission. Simply put, the A-fibres transmit to the limbic system (shared with all animals), while the C-fibres transmit to higher cortical areas which only higher primates have. The fact that different fibres transmit to different brain areas predicts a distinction between the hurtfulness and the awfulness of pain[9]. Dennett goes on to develop his version of Melzack and Wall’s model of pain, and then tests how this model deals with a variety of reports about the nature of pain. So, for example, a lot of people report that when they concentrate on their pain it loses some of its adverse qualities. This is explained because concentration “raises the volume” of the A-fibres, thus inhibiting the effects of the C-fibres. He extends this discussion to show how the model can handle data such as the effects of yoga, the phantom limb phenomenon, etc.
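
The gating idea just described is easy to caricature in code. The following toy sketch is our own illustration, not Dennett’s or Melzack and Wall’s actual model; the function name, the inhibition constant, and the activity values are arbitrary choices made only to show the qualitative behaviour.

    # A toy rendering of the gating idea: A-fibre activity inhibits the
    # C-fibre contribution before it is passed on. All numbers are arbitrary.
    def gated_pain_output(a_activity, c_activity, inhibition=0.8):
        """Return the aversive pain signal transmitted past the 'gate' (toy model)."""
        # A-fibre activity closes the gate on the C-fibre contribution.
        return max(0.0, c_activity - inhibition * a_activity)

    # Concentrating on the pain 'raises the volume' of the A-fibres,
    # so less of the adverse C-fibre signal gets through.
    print(gated_pain_output(a_activity=0.2, c_activity=1.0))  # not concentrating: 0.84
    print(gated_pain_output(a_activity=0.9, c_activity=1.0))  # concentrating: 0.28

On this caricature, raising A-fibre activity while the C-fibre input stays fixed lowers the transmitted signal, which is the pattern the concentration and yoga reports are said to exhibit.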

He then considers the effect of various different drugs on pain and notes some strange properties which his model can handle:

A further curiosity about morphine is that if it is administered before the onset of pain (for instance, as a pre-surgical medication) the subjects claim not to feel any pain subsequently (though they are not numb or anesthetized-they have sensation in the relevant parts of the bodies); while if morphine is administered after the pain has commenced, the subjects report that the pain continues (and continues to be pain), though they no longer mind it. Our model suggests that morphine and other analgesics must work on the old low path while leaving the new high path relatively in order, and such is the case. (ibid p. 432)

So again Dennett’s model is shown to capture the relevant facts about the phenomenon of pain. He goes on to discuss how the model handles facts about analgesia before returning to his original question of whether the theory he has sketched is really a theory of pain at all.

His model of pain is represented in a flow chart, and he asks whether we can locate pain itself, as opposed to its typical causes and effects, anywhere in that chart. He admits that his flow chart does indeed leave something out: the actual experience of pain. The flow chart is pragmatically useful and can explain things like hypnotic suggestion, the effects of drugs, and the effects of meditation; however, it leaves the philosophical questions untouched. Because it does not deal with the feel of pain, he argues, explanations at the sub-personal level will leave out the personal level phenomenon of pain. However, this is not necessarily a bad thing, because the personal level concept of pain is incoherent.

To show this he discusses the case of pain asymbolia, a brain disorder that results in people being in pain but not minding it. Such people will laugh if pricked violently with a needle. They admit that they feel the pain; they just do not mind having it. The sub-personal level of explanation can account for this phenomenon by pointing to brain lesions which damage people’s ability to have an appropriate response to pain stimuli while leaving the feeling of pain intact. However, at the personal level it seems impossible that a person could feel pain but not mind feeling it. Dennett claims that this is because the personal level explanation of pain is incoherent and inadequate.

He further bolsters his views on the incoherence of the personal level concept of pain by discussing a variety of other curious phenomena:

The ordinary use of the word ‘pain’ exhibits incoherencies great and small. A textbook announces that nitrous oxide renders one ‘insensible to pain’, a perfectly ordinary turn of phrase which elicits no ‘deviancy’ startle in the acutest ear, but it suggests that nitrous oxide doesn’t prevent the occurrence of pain at all, but merely makes one insensible to it when it does occur (as one can be rendered insensible to the occurrence of flashing lights by a good blindfold). Yet the same book classifies nitrous oxide among analgesics, that is, preventers of pain (one might say painkillers) and we do not bat an eye. (ibid, p. 443)

Dennett notes that the ordinary language concept of pain is full of inconsistencies like the above one. He goes on to claim:

The philosophical questions that an identity theory (or other ‘philosophical’ theory of pain) would be designed to answer are generated by our desire to put together an account that consistently honours all, or at any rate most of our intuitions about what pain is. A prospect that cannot be discounted is that these intuitions do not make a consistent set. (ibid. p. 445)

Again Dennett is arguing that there may be no explanation of the personal level concept of pain, because such an inconsistent set of intuitions cannot pick out anything that exists outside of the vague space of beliefs. In ordinary language people sometimes speak as though pains exist only in the mind of the beholder, yet at other times ordinary language implies that we can have unperceived pains. Again he asks how such a contradictory object could exist. As a result of these contradictions Dennett argues that the ordinary concept of pain is impeached, and that what we require is a better concept. Ultimately he recommends that we do away with the ordinary concept of pain altogether:

I recommend giving up incorrigibility with regard to pain altogether, in fact giving up all ‘essential’ features of pain, and letting pain states be whatever natural kind states the brain scientists find (if they do find any) that normally produce all the normal effects. When that day comes, we will be able to say whether masochists enjoy pain, whether general anesthetics prevent pain or have some other equally acceptable effect, whether there are unfelt pains, and so forth. These will be discoveries based on a somewhat arbitrary decision about what pain is, and calling something pain doesn’t make it pain. (ibid, p. 449).

This discussion amounts to an admission that the ordinary language concept of pain will not be explained by our scientific theory. However, this is something we do not need to worry about, because the ordinary language concept does not really pick out anything; it is just a bad theory. He further concludes that we will not be able to build a robot which feels pain in the incoherent ordinary language sense, but we will still be able to build a robot that can feel pain in the scientific sense of implementing a theory of pain.

So we have seen that in Dennett’s two main discussions of pain he denies that the ordinary concept of pain refers to a real entity in the external world. In his earlier discussion he argues that pains cannot be identical with brain states without breaking Leibniz’s law. Since he could find no good reason to break Leibniz’s law, and he argued that dualism was untenable, he concluded that a particular brain state cannot be identical with a particular pain. He avoided the conclusion that pain terms refer to brain states by analysing pain language as non-referential, in the same way that words like ‘sakes’ and ‘dints’ are non-referential. He argued that a scientific explanation of pain behaviour and verbal reports will not account for the ineffable qualia of pain. However, he does not think that this is a problem, because he has already argued that the ordinary language concept of pain is merely a way of talking; it does not refer to any entity.

Nearly a decade later, in “Why You Can’t Make a Computer That Feels Pain”, Dennett expanded on these points. He further analysed the ordinary language concept of pain and found it to be contradictory. This contradictory object, he argued, could not exist, so our scientific concept of pain does not need to be consistent with a non-existent object. Such a strange eliminativist view of pain flies in the face of common sense. Surely if we can know anything, we can know we are in pain; it seems absurd to deny something as fundamental as the existence of pain. In one sense, he argues, pains are real; they are as real as dollars, in the sense that they are intentional objects. We can have a scientific theory of dollars, but it will not answer questions like ‘How much does that cost in real money?’ The same is true of pain. People have beliefs and intuitions about the nature of pain, but a scientific theory cannot always capture all of people’s intuitions. Since Dennett thinks he has shown that ordinary language talk of pain is vague and contradictory, he thinks that a scientific theory can disregard this talk. This means, of course, that our theory is no longer a theory of pain.

Obviously, Dennett’s views on pain have serious implications for our theory that Dennett is guilty of the typical mind fallacy. In the case of mental imaging we surmised that Dennett, as a non-imager, assumed that we were all non-imagers and unjustifiably built his theory on the assumption that we are all like him. One strand of evidence we used in support of this position was Dennett’s comparison of our ability to call up mental imagery with Shakey the robot’s ability to do so. Shakey did not, of course, actually experience mental imagery; it merely reported having such experiences. Dennett argued that we are like Shakey in the sense that, for us, it is a case of all tell and no show. We argued that Dennett was right about his kind of mind, but wrong that all minds were like his.

By parity of reasoning we should treat pain in the same way as mental imagery. So the reader may expect us to argue that Dennett has no pain experience and that this is why he is an eliminativist about pain. The reader may then expect us to argue that Dennett is guilty of the typical mind fallacy of incorrectly assuming that all people’s pain experiences are like his. In a sense we do argue in this manner. However, before outlining our argument, we will consider an obvious objection to our position.

It is plausible to claim that people have radically different experiences of mental imagery, from non-imagers to eidetic imagers, while showing few overt behavioural differences. However, it is completely implausible to assume that people with no pain experience, or with radically powerful pain experiences, would give off few behavioural signals. In fact we know from empirical studies that people who do not experience pain are easily discovered, and are discovered at a very early age. Such people suffer from what is called congenital analgesia, and they typically do not live very long. Here is a famous description of a person suffering from the condition:

As a child, she had bitten off the tip of her tongue while chewing food, and has suffered third degree burns after kneeling on a hot radiator to look out the window. When examined… she reported that she did not feel pain when noxious stimuli were presented. She felt no pain when parts of her body were subjected to strong electric shock, to hot water at temperatures that usually produce reports of burning pain or to prolonged ice bath…A variety of other stimuli, such as inserting a stick up through the nostrils, pinching tendons, or injections of histamine under the skin-which are normally considered forms of torture-also failed to produce pain.

Miss C. had severe medical problems. She exhibited pathological changes in her knees, hip and spine, and underwent several orthopaedic operations. Her surgeon attributed these changes to the lack of protection to joints usually given by pain sensation. She apparently failed to shift her weight when standing, to turn over in her sleep, or to avoid certain postures, which normally prevent the inflammation of joints.

All of us quite frequently stumble, fall or wrench a muscle during ordinary activity. After these trivial injuries, we limp a little or we protect the joint so that it remains unstressed during the recovery process. This resting of the damaged area is an essential part of recovery. But those who feel no pain go on using the joint, adding insult to injury. (Melzack and Wall 1988, pp. 4-5)[10]

We have inserted this long quote because it illustrates the dramatic nature of congenital analgesia, and shows that Dennett is obviously not suffering from this condition. We are not making the claim that Dennett does not experience pain. Rather we are claiming that his pain experience is different from that of the ordinary person. We argue that Dennett, being primarily a linguistic and physicalistic thinker, experiences pain in a different manner from other people. Furthermore, the nature of his experiences leads him to draw different philosophical conclusions from those drawn by people with other types of minds.

In Berman’s 2008 “Philosophical Counselling for Philosophers” we saw how, in Berkeley and Locke’s debate about abstract ideas, the two thinkers’ differing psychological abilities led them to hold different philosophical views. More recently, in his 2013 paper “Do Personality Traits Mean Philosophy is Intrinsically Subjective”, Geoffrey Holtzman has demonstrated that philosophers’ personality traits correlate strongly with how plausible they find various famous philosophical thought experiments. Both studies show that philosophers’ psychologies can play a big role in determining their philosophical views. Obviously, though, the fact that some psychological variation accounts for some philosophical differences between philosophers tells us nothing about whether this is the case with Dennett and pain.

To establish whether Dennett has a different pain experience we need first to establish that such variation exists, and then to deal with the problem of whether Dennett’s pain experience is indeed different from others’. The case of congenital analgesia obviously differs from normal pain experience; likewise pain asymbolia, which we discussed above, differs from normal pain experience. However, these are severe disorders which are easily detectable by behavioural tests, and Dennett obviously does not suffer from disorders of this kind. Nonetheless, there are empirical studies which do show variation in the way people experience their own pains. In their 2011 paper “Genetic Basis of Pain Variability: Recent Advances”, Young et al. showed that people display variability in the intensity of the pain they feel. This variability was related to race; for example, African-American and other non-Caucasian people reported greater pain than Caucasians in the same clinical settings (“Genetic Basis of Pain Variability”, p. 1). It was also shown that females reported greater pain than males in the same clinical settings (ibid, p. 1). These were all variations in the intensity of pain, and Young et al. trace them to genetic causes. In his 2005 paper “Sex and Gender Issues in Pain” Roger Fillingim likewise showed that the sexes vary in the intensity of pain they report in the same clinical settings; Fillingim focuses on social determinants of such pain behaviour, as opposed to the genetic focus of Young et al. Another study of note is Thomas Gosden’s 2008 psychology PhD thesis “Images of Pain: Exploration of the Characteristics and Functions of Pain Related Mental Imagery in Chronic Pain”, in which Gosden demonstrates that people with chronic pain experience more severe pain when it is associated with involuntary mental imagery.

None of the studies we have mentioned directly supports our claim that Dennett’s views are derived from his own pain experience. However, the studies are very suggestive. The Berman and Holtzman studies show that philosophers’ individual psychological abilities and personalities can affect their philosophical views, and we have also seen that there is ample empirical evidence of wide variation in people’s subjective experience of pain. Given Dennett’s strange eliminative views on pain, the studies we have gathered together show that it is worth investigating whether those views are indeed derived from his individual experiences. To examine this more fully we will explore Dennett’s phenomenology of pain as revealed by his theoretical writings on the nature of pain.

There are four key points which indicate that Dennett has a distinct experience of pain:

(1)   Dennett is a Physicalistic-Linguistic Thinker. Hence, like most physicalistic thinkers, he is a monist[11]. He explains away dreams, pains, and mental imagery in linguistic terms, showing that he is predominantly a linguistic thinker. He is incapable of even imagining pain outside of language.

(2)   When discussing pain in Content and Consciousness he focuses on the fact that the word ‘pain’ is not referential. But whether the word ‘pain’ is referential is irrelevant to whether pain itself is identical with a brain state. The fact that Dennett cannot get past the word ‘pain’ to pain itself shows that he conceives of pain in linguistic terms.

(3)   In “Why You Can’t Make a Computer That Feels Pain” he again focuses on how the ordinary language word ‘pain’ has contradictory properties and is therefore theoretically useless. He never considers whether it is language itself which is defective, as opposed to pain itself, which may exist independently of our linguistic representations. Again this is because he cannot conceive of experience outside of language.

(4)   We are not arguing that Dennett has no experience of pain, merely that he has no non-linguistic experience of pain. He cannot see beyond language. All of his consciousness is overwritten by his linguistic and physicalistic nature.

Our analysis of Dennett’s views on mental imagery, colour phi, after-images, and pain reveals that he has a distinctive type of mind. Like all Type 2 thinkers he has no experience of the Cartesian Theatre; this is evident from his discussions of mental imagery and ‘real seemings’. His discussion of pain reveals that he is also, to a lesser degree, a Type 3 thinker. Dennett has a bodily experience of pain; however, his experience of pain is deeply immersed in his linguistic competence. He is incapable of even conceiving of pain outside of language. This is evident from his theoretical views on pain, which centre entirely on the ‘concept’ of pain and ignore the actual experience of pain.

We will get a clearer view of Dennett’s experience of pain by contrasting his views with those of other thinkers. We analysed Thomas Nagel as a blend of a Type 1 and a Type 2 thinker and, as one would expect, his views on pain are entirely different from Dennett’s.

With Nagel and Dennett, buried beneath the surface of their argumentation, what one gets is a clash of intuitions. Or, to put things more precisely, Nagel has stronger intuitions on certain topics than Dennett, and this influences how each reacts to philosophical arguments. Over the years, as Nagel and Dennett have sketched their alternative theories of the mind, the issue of intuition has come up over and over again. Here is Nagel in The View From Nowhere:

But philosophy is not like a particular language. Its sources are preverbal and often precultural, and one of its most difficult tasks is to express unformed but intuitively felt problems in language without losing them… (The View From Nowhere, p. 11)

Dennett has often been perplexed by this attitude of Nagel’s, and has correctly pointed out that a lot of scientific progress has been made by not treating our intuitions as sacrosanct. Here is Dennett criticising Nagel’s appeals to intuition:

Nagel is the most eloquent contemporary defender of the mysteries and anyone who suspects I have underestimated the problems I pose for my theory will be braced by Nagel’s contrary assertions. Assertions, not arguments. Since Nagel and I start from different perspectives, his arguments beg the question against a position like mine: what counts for him as flat obvious, and in need of no further support, often fails to impress me. I assume that whatever the true theory of mind turns out to be, it will overturn some of our prior convictions, so I am not cowed by having counterintuitive implications pointed out to me. No doubt, Nagel who calls his book ‘deliberately reactionary’, is equally unshaken when it is pointed out that his allegiance to certain intuitions is all that prevents him from escaping his perplexity down various promising avenues of scientific research (The Intentional Stance, p. 6)

Dennett admits that he and Nagel are begging the question against each other; however, he argues that his approach has yielded more pragmatic success, so he will continue using it. Our contention is that both thinkers adopt the stance they do because of the nature of their minds.

This comes out when we see how Nagel views the phenomenon of pain. Nagel’s views are diametrically opposed to Dennett’s. He makes the claim that pain is something that is intrinsically bad, not just something we hate (Mind and Cosmos, p. 110). He further argues that an instrumentalist account of pain does not capture the actual badness of pain. An instrumentalist account of pain is a Darwinian-type explanation: it could be argued, for example, that the feeling of pain is a warning that something is wrong and that action needs to be taken; people who are incapable of feeling pain would be at a distinct selective disadvantage. Nagel does not doubt that an explanation of this kind is possible. However, he argues that this Darwinian explanation does not explain the objective judgement that pain is intrinsically a bad thing. For Nagel, pain has a dual existence: (1) a feeling for which we can give an evolutionary explanation, and (2) an object which we can reflect on and judge to be intrinsically bad. He even claims that his objective value judgements are so strongly felt that if Darwinian theory contradicts them he will reject the Darwinian theory (ibid, p. 110).

Dennett of course takes the opposite view to Nagel: for him, our intuitions take a secondary role to scientific discoveries. Part of the reason Dennett adopts this attitude is that experimental research in physics has shown how incredibly unreliable our intuitions are; our intuitive conception that the world works according to contact mechanics was refuted by Newton three centuries ago. So Dennett thinks that we should be open to similar things happening in the philosophy of mind. We argue that Dennett, though partly motivated by reasons, is also influenced by the nature of his own experiences.

Dennett and Nagel clearly have entirely different conceptions of the nature of pain. Nagel thinks that pain is something we feel which goes beyond anything we can express in language (The View From Nowhere, p. 11). Dennett, by contrast, views pain as something immersed in language; he thinks of us as Joycean machines, creatures who are overridden by language (Consciousness Explained, p. 275). We argue that their different conceptions of pain are based on their own inner experiences. Their different theoretical descriptions of the nature of pain show how differently each of them experiences pain.

SOME OBJECTIONS

When discussing Dennett’s views on mental imagery, colour phi, and after-images we came to the conclusion that he was denying the existence of phenomenal consciousness. We argued that on Dennett’s view our brain tells us that we have had such and such an experience, and that this report grounds our belief, not a direct experience of phenomenal consciousness. However, Dennett’s discussion of the phenomenon of change blindness complicates our analysis of his views on the nature of consciousness. Before going into how his views on change blindness complicate our interpretation of his theory, we will first outline what the phenomenon of change blindness is, and how Dennett interprets it.

Change blindness is a well-known phenomenon in psychology. A famous example was studied by Rensink, O’Regan and Clark (1997). They present subjects with near-identical photographs for 250 milliseconds each, with a blank screen presented for 290 milliseconds between each photograph (Dennett, Sweet Dreams, p. 83). The near-identical photographs are flashed back and forth for subjects to watch. Subjects typically do not register any change in the photographs for at least 30 seconds. Once they register the change (a white door turning brown at the corner of the picture), or are told about it, the change becomes obvious to them. Furthermore, prior to their registering the change at a conscious level, their visual system seems to be responding to the changes. Thus Dennett notes:

In the phenomenon of change blindness for colour changes, for instance, we know that the colour sensitive cones in the relevant region of your retina were flashing back and forth, in perfect synchrony with the white/brown quadrangle. (ibid, p. 88)
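
The timing structure of the flicker paradigm itself is simple. Purely as an illustration, here is a sketch of the presentation loop implied by the description above; the photograph filenames are hypothetical placeholders, and the ‘display’ is mocked with print statements rather than a real graphics call.

    # A schematic of the flicker paradigm's timing, using the durations
    # reported above (250 ms per photograph, 290 ms blank mask).
    import time

    def flicker(photo_a="scene_white_door.jpg", photo_b="scene_brown_door.jpg",
                cycles=3):
        """Alternate two near-identical photographs with blank masks (mock display)."""
        for _ in range(cycles):
            for photo in (photo_a, photo_b):
                print("showing", photo)
                time.sleep(0.250)  # photograph shown for 250 ms
                print("showing blank screen")
                time.sleep(0.290)  # blank mask shown for 290 ms

    flicker()

The blank interval is what hides the local motion transient that would otherwise make the change pop out immediately; without it, subjects spot the change at once.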

He argues that we need further tests to understand which areas of the brain light up once people become aware of the changes in what they are viewing. Dennett claims that the phenomenon of change blindness poses serious difficulties for the notion of qualia. To illustrate this point he typically gives the change blindness test to people in lectures. After people figure out that the change has occurred he asks them the following question:

Now before you noticed the panel changing color, were your color qualia for that region changing? We know that the cones in your retinas in the regions where the light from the panel fell were responding differently every quarter of a second, and we can be sure that these differences in transducer output were creating differences farther up the pathways of colour vision in your cortex. But were your qualia changing back and forth white/brown/white/brown-in time with the colour changes on the screen? Since one of the defining properties of qualia is their subjectivity, their “first-person accessibility,” presumably nobody knows- or could know- the answer to this question better than you. So what is your answer? Were your qualia changing or not? (Sweet Dreams p. 83)

Dennett argues that whatever answer is given to this question will make trouble for the theoretical notion of qualia. The question we want to consider is whether Dennett’s views on change blindness affect our claim that he is denying the reality of phenomenal experience.

Consider Dennett’s solution to the colour phi phenomenon: the colour change was not happening in the external world, and he likewise denied that there was a mental presentation of a red spot changing into a green spot in a Cartesian theatre. So we had a problem of saying what was occurring in the colour phi phenomenon. Dennett argued that our brain simply tells us that a red spot turned into a green spot; the colour phi phenomenon is not presented to the mind’s eye. Dennett’s solution has led people to accuse him of explaining only access consciousness while ignoring phenomenal consciousness.

His position on change blindness helps us better understand why he seems to ignore phenomenal consciousness. For some people the idea of phenomenal consciousness is parasitic on the notion of qualia: when people claim that phenomenal consciousness is a basic fact of our experience they are typically referring to our subjective experience of qualia. Qualia are the basic feels of experience: the taste of beer or of coffee, the feel of touching a smooth or rough surface, etc. Qualia are first-person experiences, and philosophical lore has it that no amount of third-person scientific research can reveal what the intrinsic experience of qualia in fact is, because qualia are intrinsic first-person phenomena. Qualia, our intrinsic first-person experiences, just are our phenomenal consciousness. In the case of colour phi Dennett denied that anything is presented before the mind, while in the case of change blindness he went further and argued that the very notion of qualia makes little sense. So it could be argued that his change blindness position is even more radical than his colour phi position.

However, it still seems a bit of a stretch to read Dennett’s claims about change blindness as an argument against the notion of phenomenal consciousness. Dennett does, after all, argue that after about 30 seconds people come to recognise the changes between the two pictures. To some, this indicates that for Dennett, when people become aware of the changes in the pictures, the change enters phenomenal consciousness. To think this is to radically misunderstand Dennett’s views on consciousness. Dennett argues that there are three different ways to answer his question about whether our colour qualia were changing prior to our noticing the changing pictures: (1) yes, (2) no, (3) I do not know. If someone answers yes, then this falsifies one of the key criteria of qualia, namely that qualia are experienced directly in a first-person manner; a person who answers yes is inferring the nature of their qualia from a third-person point of view. If a person answers no, he argues, they risk trivialising the notion of qualia: our qualia get reduced to our judgements that they occurred. This means that our qualia can no longer be considered to have an intrinsic nature; their nature will be relational, in that they rely on the judgement of a subject in order to exist. Furthermore, since zombies are behaviourally indistinguishable from ordinary people, a person who answers no will be committed to the view that the zombie has qualia; after all, the zombie will make the same qualia judgements as everyone else.

If people answer that they do not know whether their qualia were changing back and forth, then there can be two reasons for their confusion: (1) despite what they originally thought, they do not really know what qualia are, or (2) they do not know the answer because qualia have properties which go beyond the reach of both third-person and first-person science. Dennett argues that a person who makes Claim 2 is placing themselves outside science, so their views are not really candidates for serious consideration. So he in effect argues that Claim 1 is the correct conclusion to draw from the fact of change blindness.

So Dennett’s change blindness argument purports to show that the philosophical notion of qualia is a mess. He further supports this claim with his thought experiment “The Strange Case of Mrs Clapgras”. In his earlier paper “Quining Qualia” Dennett had presented a series of thought experiments designed to show that the philosophical notion of qualia is incoherent. The combined force of his various arguments is meant to make people less comfortable with using the term qualia, to make them realise that despite appearances the term is senseless.

If we take Dennett’s arguments against qualia seriously, they undercut one of the primary objections against his views on the colour phi phenomenon. The objection was that his account of colour phi is phenomenologically inaccurate: we know from experience that we observe the colour change from red to green, and we then form our judgements based on the observation of this change; we do not form the judgement independently of the observation. This argument is designed to show that from the point of view of first-person experience we know that Dennett’s conclusions on colour phi are false; we directly experience the phenomenon despite what he thinks. However, if one takes his arguments against the notion of qualia seriously, one is given pause to doubt whether the argument from first-person experience is sufficient to refute Dennett’s claim.

A person who used first-person experience as an objection against Dennett’s views on colour phi, mental imagery, etc. would be arguing that despite the theoretical problems with these phenomena (no evidence for the existence of figment, no evidence for the existence of a mind’s eye), our first-person experience shows that the phenomena exist. They would argue that our phenomenal experience grounds our judgement; it is not the case that our brain merely tells us that we experience mental imagery; we actually experience the qualia of an image being presented before the mind’s eye. Dennett’s discussion of change blindness (together with his other thought experiments) shows that this argument against his view of the mind does not work.

Qualia are often presented as the most basic things we can know: we can doubt all else, but we cannot doubt how our most basic experiences seem to us. What Dennett’s thought experiments purported to show is that, despite what we may believe, we do not really have any firm handle on what these qualia are. So, for example, the card trick showed that while we may think we are experiencing definite qualia in our peripheral vision, we actually are not. In the case of change blindness we have no clue as to whether qualia are being flashed back and forth prior to our noticing. Taken together, Dennett’s thought experiments do seem to show that the philosophical notion of qualia is a mess. This has serious implications for those who argue against Dennett’s explanation of the colour phi phenomenon by appealing to facts about how things seem to them. Since Dennett has cast serious doubt on the reliability of our claims about how things seem to us, appealing to such seemings as an incorrigible foundation from which to refute his views on mental imagery and colour phi is questionable, to say the least.

Critics of our position could argue that Dennett’s arguments against qualia undercut our claims that he is guilty of the typical mind fallacy. A central aspect of our argument centred on people’s introspective reports about what they experience, while Dennett’s various different arguments show that people are not authoritative about what they experience and that they do not really know what they mean when they use the term qualia. We argue, on the contrary, that Dennett’s arguments against qualia have no effect on our central claim.

If we argued entirely from people’s introspective reports and nothing else, then Dennett’s arguments about qualia would undercut our claims. However, we provide much more evidence than this. First, we rely on neurological evidence. We have discussed evidence from people like Kosslyn, Ganis et al. which shows that imagers have the occipital lobe activated to a greater degree than non-imagers. So our introspective reports about variations in people’s abilities to form mental imagery are backed up by impressive neurological evidence, and by the impressive behavioural evidence summarised earlier in the paper. A critic could reply that, on standard definitions of qualia, we cannot appeal to neurological evidence to support our claims. This criticism does not apply to us: we are not using the theory-laden notion of qualia; rather, we just speak of experience. We see no valid reason why we should be barred from using neurological and behavioural evidence to support our claims. We agree with Dennett that fantasies about zombies which are identical to ordinary people but which lack consciousness are pointless. When we speak of people who have different mental abilities, we argue that these different abilities can be detected via behavioural tests, introspection, verbal reports, neuroscientific tests, etc. Using such diverse strands of evidence has led to discoveries of previously unknown subjective variation in mental imagery, and to diagnoses of colour blindness, synaesthesia, etc.

Another type of evidence we rely on is behavioural evidence, more precisely verbal behaviour, in the form of the texts our subjects produce. People’s philosophical views on various topics often inadvertently reveal the nature of their type of mind. Bearing in mind, then, that our evidence is not limited to introspective reports, it should be obvious that Dennett’s evidence that introspection is sometimes unreliable, and gives us incomplete knowledge of a topic, does not refute our typical mind fallacy argument.

Nonetheless, Dennett’s position does raise a more fundamental question for our theory. We have been assuming that Dennett disagrees with us that people have variations in their imagery abilities. However, this does not have to be the case. Consistent with his own theory of consciousness, he could accept that people report variations in their ability to form mental images, and that people who claim to be non-imagers and people who claim to be imagers have different patterns of neural activation. Nothing in Dennett’s theory forces him to deny any of these facts about imagery. What Dennett has to argue is that neither the imager nor the non-imager is actually presented with any imagery. So imagers will have some process in their brain which forces them to form the judgement that they experience mental imagery (in a similar way to how we form the judgement that we experience colour phi), while non-imagers’ brains will not form judgements of experiencing imagery. Obviously, if Dennett wants to defend such a model he will need to provide neurological evidence to support it; however, he would argue that he is more likely to find evidence for his model in the brain than he is to find a Cartesian theatre there. The main point is that there is no problem in principle with him acknowledging that people vary in their mental imagery.

The problem with Dennett’s model is that it does not do justice to the phenomenology of mental imagery. People (even poor imagers like me) experience mental imagery. It is more than being told that we experience such imagery; it is an actual experience. This reply, of course, brings us back to our original debate: can we be so sure that we have a direct experience of something? We would argue that we can. Our experiences are foundational, and nothing Dennett has said changes this fact. His arguments against qualia are arguments against a philosophical construction; they are not arguments against experience. Nobody who has direct experience of colour phi, mental imagery and pain could deny that direct experience. However, for a thinker like Dennett, whose experience is primarily bodily and linguistic, his theory of consciousness will make perfect sense. In fact we would expect a person with his type of consciousness to endorse the views that he does.

 

CONCLUSION

Dennett’s eliminativist views on consciousness have puzzled interpreters for years. Some people find his arguments utterly compelling, while others believe they leave out key aspects of the mind. We have tried to make plausible the claim that such disagreements result from people having different types of mind and not factoring these differences into their theories. We suggest that philosophers can avoid such mistakes by engaging in introspective research, submitting themselves to psychological tests, etc. Understanding individual variation between minds, and how such variation influences theorising, can only help to make philosophy more objective than it has heretofore been. Ours is but a first step towards what we hope will be a more scientific way of practising philosophy.


[1] William James quotes taken from Berman’s Penult.

[2] A non-imager interviewed by Galton, the astronomer Major John Herschel, held a similar view to Dennett: “The questions presuppose assent to some kind of proposition regarding ‘the mind’s eye’ and the ‘images’ which it sees… It is only by a figure of speech that I can describe my recollection of a scene as a ‘mental image’.” (Galton 1907). Galton’s tests revealed that Herschel was in fact a non-imager. We make the same claim about Dennett.

[3] In his 1958 paper “Concerning Mental Pictures” Arthur Danto describes his own imaging abilities, whereby he can perform precisely the tasks which Dennett claims are impossible. This again indicates that Dennett is describing his own type of images rather than the images which other subjects have.

[4] See David Berman: A Manual of Experimental Philosophy (2009).

[5] See Bill Faw: Outlining a Brain Model of Mental Imaging Abilities (1997).

[6] See the Introduction of Berman’s Berkeley and Irish Philosophy, his A Manual of Experimental Philosophy, and his Philosophical Counselling for Philosophers. Also William James’s Principles of Psychology for historical evidence of the typical mind fallacy.

[7] The data on dream experiences and mental imagery are only suggestive. We are not claiming to have proved that people who are non-imagers are always vivid dreamers. Rigorous experimental tests will need to be done in order to establish this conclusively.

[8] See Kolers, P. A. and von Grunau, M. (1976) “Shape and Color in Apparent Motion”, Vision Research 16, pp. 329-335.

[9] I am here simplifying Dennett’s model; however, the simplification does not affect the argument of the paper.

[10] Quote taken from Nikola Grahek, Feeling Pain and Being in Pain, p. 8.

[11] The link between monism and physicalistic types is explored in more detail in Berman’s 2008 Penult.

Hurford and Davidson: Animal Conceptual Abilities

“The term ‘concept’ itself is a battleground. On the one hand, some, particularly some philosophers deny that non-humans can ever have concepts. For them a concept is essentially bound up with language; only creatures with language, so only humans, can have concepts. This view is now substantially eroded, and a majority of researchers are happy to talk about concepts in non-human animals” (Hurford, The Origins of Language, p. 61)

Hurford’s cursory mention that a lot of philosophers think that language is necessary for a creature to have concepts is uninformative. He admits that a lot of philosophers hold this view, but he doesn’t say who these philosophers are or what arguments they present to support their position. Furthermore, his remark that the view is now eroded is not very helpful, as he does not provide any evidence of why it is eroded: what arguments and experimental evidence have been presented which make the view seem so untenable? In this blog I will discuss the topic in more detail than Hurford manages to. I will first outline the type of conceptual abilities that Hurford attributes to animals and the evidence he presents for these abilities. Then I will discuss the best arguments against attributing concepts to non-linguistic creatures and evaluate the extent to which these arguments undermine Hurford’s position. In the final sections I will severely critique the arguments against attributing concepts to non-linguistic creatures. Overall I will argue that Hurford’s position is a defensible one and that the criticisms of attributing concepts to non-linguistic creatures largely fail.

Hurford presents a series of important points that he thinks indicate that animals have concepts. A key notion for Hurford is that of a detached representation: a representation of an object that is no longer present. Detached representations involve an index (a kind of pointer binding together all the properties associated with a remembered object); the index keeps the memory of one object apart from other remembered objects. He argues that an index and its properties are the fundamental basis for the logical distinction between functions and arguments. So the idea of a detached representation is a key ability which helps animals form concepts.

He goes on to argue that some animals not only remember objects but also remember events. He claims that ‘episodic memory’ is key to this particular skill. He cites experimental data (Raby et al. 2007) indicating that scrub jays have episodic-like memories: they remember the what, where, and when of events significant to them (ibid. p.67). This episodic memory only lasts a day or so, and is therefore not as powerful as human episodic memory, which can last for many years. Non-human animal episodic memory may be limited in the case of remembering events, but some non-human animals, elephants for example, can remember objects years after they encountered them. This suggests that remembering events is a much more difficult and complex process than remembering objects.

Hurford cites the experimental work of Duane Rumbaugh on ‘reversal learning’, which indicates that some animals can apply an OPPOSITENESS OPERATION to a rule. Rumbaugh’s comparative experiments were done on apes and monkeys (apes, of course, being more closely related to humans than monkeys). The experiments involved getting the animals to associate a reward with a particular stimulus A and not with a stimulus B. Both monkeys and apes can learn this trick pretty easily. Once they have been conditioned in this way, the researchers reverse the procedure: stimulus B is rewarded and stimulus A is not. What is interesting is that when the reversal is taught, monkeys need to slowly and painstakingly unlearn what they learned in the first stage before they can learn the procedure in stage two (ibid. p.71). Furthermore, the better the monkeys have learned the first procedure, the longer it takes them to learn the second. Surprisingly, the opposite holds for apes: the better they have learned procedure one, the more easily they learn procedure two. The explanation given for this strange fact is that apes have an abstract ability to reverse rules they have learned by applying what the experimenters call an oppositeness operation to the rule. This level of abstraction is something apes can perform which most monkeys cannot, and Hurford thinks this ability may indicate a progression towards the type of conceptual abilities only humans seem to have.
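To make the monkey/ape contrast concrete, here is a minimal toy sketch. This is my own illustration with made-up parameters, not Rumbaugh’s actual model: an associative learner must climb back out of its old habit during reversal, so better stage-one learning means slower reversal, while a learner with an oppositeness operation simply flips the rule in one step.

```python
# Toy contrast: an associative learner (monkey-like pattern) versus a rule
# learner that can apply an "oppositeness operation" (ape-like pattern).
# Illustrative sketch only; learning rate, threshold, and update rule are
# all made-up parameters.

def trials_to_learn(start, lr=0.2, threshold=0.9):
    """Trials needed to push an association from `start` past `threshold`."""
    strength, trials = start, 0
    while strength < threshold:
        strength += lr * (1.0 - strength)  # simple delta-rule style update
        trials += 1
    return trials

for stage1_trials in (5, 20, 50):          # more stage-1 trials = better-learned rule
    s = 0.0
    for _ in range(stage1_trials):
        s += 0.2 * (1.0 - s)               # build the 'reward follows A' association
    # Associative learner: reversal starts from the negation of the old habit,
    # so the stronger the habit, the longer the climb back up.
    print(f"stage 1: {stage1_trials:2d} trials | "
          f"associative reversal: {trials_to_learn(-s):2d} trials | "
          f"rule flip ('oppositeness'): 1 step")
```

Running the sketch shows the associative learner’s reversal cost rising with stage-one training while the rule learner’s stays constant, which is the qualitative pattern the experiments are said to reveal.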

So detached representations, made possible by episodic memory, and the oppositeness operation are used by Hurford as evidence that non-human animals have concepts. Hurford grudgingly admits that one could call these animal abilities proto-concepts, which provide rich representational information about the world, but basically he argues that the evidence he provides is evidence of conceptual abilities in animals. Because of the limited space in his slim book, which was intended for popular consumption, Hurford obviously only touched the tip of the iceberg in his evidence for conceptual abilities in animals. He mentioned the evidence from Alex the parrot, who was very skilled at picking out abstract properties of objects, as further evidence of animal conceptual abilities. He could have multiplied examples; the truth is that there is plenty of behavioural evidence which animal ethologists and cognitive scientists think indicates that non-human animals have concepts.

Yet despite the fact that it is pretty much a consensus in science that a wide variety of non-human animals have at least some conceptual abilities, as Hurford correctly noted, a lot of philosophers disagree with this view. A variety of philosophers have expressed scepticism about whether non-human animals have concepts: Quine, Rorty, Brandom, and Donald Davidson amongst others. Davidson has probably gone deeper into the matter than any other philosopher, so it is probably best to evaluate his arguments against attributing conceptual abilities to non-human animals.

Davidson makes a number of arguments which, he claims, make it implausible to think that animals are concept mongers (he admits that his arguments don’t amount to a proof on the matter).

  • Attributing concepts to animals because they can reliably distinguish aspects of their environment leads to a situation where one is forced to attribute concepts to flowers, which can distinguish sunlight from non-sunlight, or thermostats, which can distinguish degrees of heat. In none of these cases is reliable discrimination of aspects of an environment an indication of having a concept of x or y. To have a concept one needs to be able to make claims about x or y and justify them to others.
  • In order for a creature to have one concept it needs to have many other concepts connected to it. Having a concept of a dog, for example, entails having the concept of an animal, the concept of a living thing, and a concept of entities which do not count as dogs, etc. To have a connected series of concepts is to have a series of beliefs, desires and intentions involving those concepts. These abilities cannot emerge without an intersubjective language that can be used to make claims about the world that can be judged true or false.
  • The Referential Opacity Objection: Davidson mentions the case of a dog running after a cat; while chasing the cat the dog momentarily loses sight of it and sees rustling in a large tree nearby. He starts barking at the rustling tree. Meanwhile the cat he was chasing has run up a different tree. Davidson notes that someone viewing this situation may claim that it shows the dog entertaining the proposition ‘The cat is up the tree’, and holding the attitude of belief towards that proposition. Davidson notes a difficulty with this approach. In natural language, embedding sentences in belief contexts makes them referentially opaque, and this affects their logical behaviour. Consider sentence (A): Bruce Wayne is the crime fighter Batman who caught the Joker. Now consider sentence (B): John believes that Bruce Wayne is the crime fighter Batman who caught the Joker. Sentence (A) is true because Batman and Bruce Wayne are the same person; the statement is simply one of identity. Sentence (B)’s truth value, by contrast, depends on what John believes. Attaching ‘X believes that’ changes a context from referentially transparent to referentially opaque (the substitution pattern is sketched schematically after this list). However, in the case of the dog chasing the cat up the tree things are different. Suppose in our imagined case that the cat has gone up a tree and the dog is barking at that tree. In this case the sentence (A) ‘The cat is up that tree’ would be true. Now suppose that tree is an oak tree; the sentence (B) ‘The cat is up that oak tree’ will be true as well. If the tree the cat is up is the tallest tree in the forest, then the sentence (C) ‘The cat is up the tallest tree in the forest’ will be true too. As we know, sentences change from a transparent to an opaque reading when embedded in belief contexts. But when we attach belief sentences to non-linguistic creatures we cannot exploit this asymmetry. Take (D) ‘The dog believes that the cat is up the oak tree’, (E) ‘The dog believes that the cat is up the tree’, and (F) ‘The dog believes that the cat is up the tallest tree in the forest’. Here we have a series of attributions to the dog, and we have no evidence to say whether D–F are true or false. This contrasts with sentence (B) above, where we can discover what John believes by simply asking him and getting him to justify his claim. There is little evidence that the dog has any of the concepts ‘tree’, ‘cat’, ‘forest’, ‘tallest’, ‘oak tree’, or any of the other possibilities attributable to his behaviour. Thus we have little reason to attribute any concepts to the dog in this case. As Davidson puts it in his (1982) ‘Rational Animals’:

“In a popular if misleading idiom, the dog must believe, under some description of the tree, that the cat went up that tree. But what kind of description would suit the dog? For example, can the dog believe of an object that it is a tree? This would seem impossible unless we suppose the dog has many general beliefs about trees: that they are growing things, that they need soil and water, that they have leaves or needles, that they burn. There is no fixed list of things someone with the concept of a tree must believe, but without many general beliefs, there would be no reason to identify a belief as a belief about a tree, much less an oak tree. Similar considerations apply to the dog’s supposed thinking about the cat.” (Rational Animals p. 98)

  • In his (1995) ‘What Thought Requires’ Davidson offers further arguments against the view that non-linguistic animals have concepts. He argues that even if a creature had a proto-language consisting of a finite number of names and predicates, the ability to use demonstrative devices to pick out an in-principle infinite number of objects, and the ability to use truth-functional connectives to give this proto-language creativity (the ability to use a finite vocabulary to make an in-principle infinite number of statements), we would still not be justified in attributing an ontology to it. This is because, for Davidson, such a creature does not have a concept of an object. To have a concept of an object a creature needs to have grasped the syntax of quantification; it is, according to Davidson, these devices which help us construct complex predicates and map them onto objects in the world. If a creature cannot do this then, Davidson believes, we are not justified in treating it as a concept-mongering thinker, and only creatures who have a language have this apparatus.
  • Davidson offers one final consideration which he believes shows that only language-using creatures have concepts. He argues that in order for a creature to have a concept of x the creature needs to have a concept of truth: it needs to know when it is correct or incorrect in applying the concept. If a creature has no sense of when it is using a concept correctly or incorrectly then we have no reason to think that the creature has that concept. On Davidson’s picture a creature has the concept of true and false applications of a concept only when it can triangulate on a shared object of experience with another self. So a necessary condition of being a rational creature with concepts is having a self, an other, and a shared object of experience. The other and the shared world are what give the creature the ability to recognise that it has applied a concept incorrectly; this is provided by, and only by, another subject (or community of subjects) against whom one can check one’s reactions to the stimuli for correctness. Being able to triangulate on shared objects of experience and make claims that are true or false is, according to Davidson, only possible for creatures who have a language. Therefore, he concludes, only of creatures who have a language are we justified in saying they have concepts.
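The logical point behind the opacity objection can be put schematically. This is my own formulation of the familiar substitution pattern, not Davidson’s notation:

```latex
% Transparent context: co-referring terms substitute salva veritate.
a = b,\; F(a) \;\vdash\; F(b)

% Opaque (belief) context: the substitution fails.
a = b,\; \mathrm{Bel}_{\mathrm{John}}(F(a)) \;\nvdash\; \mathrm{Bel}_{\mathrm{John}}(F(b))
```

For a speaker like John, questioning settles which content sits inside the belief operator; for the dog, nothing in its behaviour selects between Bel(dog, the cat is up the oak tree), Bel(dog, the cat is up the tallest tree), and so on.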

I have obviously condensed Davidson’s arguments above but this is necessary in a short blog. Overall I think his arguments do not really touch the claims made by Hurford on animals having concepts. I will now briefly say what I think is wrong with each of Davidson’s arguments and show how they do not seriously undermine any of Hurford’s claims.

Davidson’s first complaint, that if we attribute concepts to non-linguistic creatures because they can reliably distinguish between aspects of their environment then we will be forced to attribute concepts to thermostats and flowers, is a serious point. It shows that we cannot use reliable discrimination between aspects of the environment as a standalone justification for attributing a concept to something or someone. However, Hurford is not merely pointing to a creature reliably distinguishing between x and y. He provides evidence that creatures can remember objects which are no longer present, and that some creatures (scrub jays) not only reliably distinguish between x and y but can use this distinction to judge that since x is perishable there is no point in digging it up, while y is not perishable and so is worth digging up. This requires more than the ability to distinguish between x and y; it requires the ability to use the distinction to make inferences. If Davidson denies this, he needs to find a way of predicting the behaviour of scrub jays more accurately without using the language of the scrub jay forming inferences. Incidentally, Davidson mentioned that a creature being surprised by something may be good evidence that it believed that x was the case and has discovered that x is false. The scrub-jay experiment provides a way of testing whether the birds are capable of being surprised, by deceiving them into thinking you are burying non-perishable food while actually burying perishable food. Monitoring the behaviour of the scrub jay when it finds nothing may be instructive (though it would be far from conclusive).

Davidson’s second argument, that in order for a creature to have one concept it must have many concepts which hang together in an interconnected web of beliefs and intentions, is far from conclusive. I agree with Davidson’s claim that concepts are connected together in an interconnected web. But I don’t see how this fact has any consequences for the attribution of concepts to non-linguistic animals. If we start with the assumption that animals are genetically programmed with certain key concepts (causality, agency, object, dominance, submission, depending on the particular animal) and use these concepts to develop basic theories about the world, we have a perfectly reasonable schema with which to empirically test our assumptions. It could be argued that by positing this innate set of concepts I am begging the question against Davidson. I agree that I am making a question-begging assumption, but one that can be tested by experimental evidence. If Davidson wants to start with the opposite assumption he is welcome to, as long as he is willing to test his assumptions empirically as well. However, merely pointing to the fact that concepts are holistic provides no evidence either for or against non-linguistic animals having concepts.

Davidson’s third argument amounts to nothing more than an appeal to intuition. He says that he can think of no way of telling whether a dog has a concept of ‘tree’, ‘oak tree’, ‘the tallest tree in the forest’, etc. He says that since we have no way of deciding which, if any, of the concepts to attribute to the dog, there is no fact of the matter and we have no justification for attributing any concepts to the animal. This argument simply doesn’t work. Davidson doesn’t even consider whether we could use a variety of behavioural tests to whittle down what concept of a tree the dog has. He just assumes, without trying, that it cannot be done and draws his negative conclusion. This appeal to incredulity is a dreadful argument. We would only be justified in drawing such negative conclusions if years of testing showed that we cannot draw the relevant distinctions. There is no reason to draw the conclusion, as Davidson does, by simply declaring in advance that there is no empirical way of deciding the issue.

Davidson’s fourth argument, that we have no reason to attribute an ontology to a creature until it has mastered quantification, is derived directly from Quine. Quine, like Davidson, argued that mastering the syntax of quantification was essential if we were to attribute an ontology to a creature, and that mastering the syntax of quantification was made possible through learning a language. This claim has been tested empirically. In their 1990 paper ‘Ontological categories guide young children’s inductions of word meaning’[1], Soja, Carey, and Spelke tested whether the ontological distinction between objects and non-solid substances conditions the projection of word meanings prior to the child’s mastery of count/mass syntax. If the child differentiates in this manner before grasping the syntax of quantification, this shows that the child is not generalizing the word based on perceptual similarity, but on the type of entity presented. If, for example, the child were generalizing according to an innate perceptual similarity space focused on shape, why would this not work for substances? The answer is that the child recognises that objects and substances are distinct ontological categories. Soja et al. summed up their results as follows:

“In sum, the children chose according to object type when the stimulus was an object and according to substance type when the stimulus was a non-solid substance. There was no effect of the syntactic context: performance was neither facilitated nor hindered by the additional syntactic information.

The data from Experiment 1 show that different inferences about the meaning of a newly heard word are drawn according to the ontological status of its referent. If the word refers to an object, the child’s projection respects shape and number, and ignores texture, color, and substance. If the word refers to a non-solid substance, the child’s projection ignores shape and number, respecting texture, color and substance.” (ibid., 192)

 

From this experiment Soja et al. claim to have shown that Quine’s view of how children learn language is incorrect: contrary to what Quine claims, children have a grasp of distinct ontological categories prior to mastering count/mass syntax (which for Quine and Davidson is the same thing as mastering the syntax of quantification). It also suggests that these innate ontological categories, not the apparatus of quantification, are what help a child learn a language. If this is right then, despite what Davidson and Quine think, a child has an ontology prior to mastering the syntax of quantification, and this ontology helps the child learn its language. The experiment therefore tells against Davidson’s claim that a creature cannot have an ontology without having a language and the rules provided by the syntax of quantification.

Davidson’s fifth point, about triangulation, is simply too vague to evaluate. Non-linguistic creatures do seem to prefer dyadic interactions when communicating. But if Davidson wants to make any large-scale claims on the topic he will need to examine detailed cases, like vervet monkeys seeming to triangulate with conspecifics, and discuss things like the bee waggle dance, etc. It is hard even to evaluate Davidson’s[2] claims here until this much-needed detail is filled in.

Overall Davidson’s arguments do not really present a serious challenge to Hurford’s views on non-linguistic animals having conceptual abilities. In my next blog I will evaluate Hurford’s discussion of how language developed from the conceptual abilities of our ancient ancestors.

[1] I have discussed this experiment in detail in my blog post ‘The Indeterminacy of Translation and Innate Concepts’. Here I have just pointed to the conclusion of the experiment; those looking for more detail should read my earlier blog post.

[2] Obviously Davidson is dead, so he won’t be filling in any details; I refer to people who are convinced by Davidson’s arguments and the work they need to do on this topic.

A REPLY TO SOME CRITICISMS OF MY HURFORD BLOG

My most recent blog, ‘Hurford: Frogs, Flies, Dennett and Fodor’, discussed what I thought were some weaknesses in Hurford’s discussion of the proto-concepts of frogs. In response to my blog the linguist and philosopher Dr David Pereplyotchik (henceforth David P) offered some criticisms which warrant a detailed discussion, as they help clarify my position. I will first discuss a criticism made by David P with which I agree; this criticism reveals some sloppy thinking on my part. While a blog isn’t held to the same standards as a peer-reviewed paper, there is no excuse for the sloppy thinking in this case, and I am grateful to David P for pointing it out to me:

David K.

“But talk of meaningful percepts is out of place in both cases. With this in mind one wonders why Hurford speaks of the frog having meaningful percepts; this implies conscious awareness of the black dots.”

David P.

“Why would working at the personal level require recognizing only conscious awareness?  Why not just awareness, conscious or otherwise?  Are you assuming that nonconscious awareness is an oxymoron?  Or that it must be sub-personal?  Neither of those assumptions are defensible.”

David K:

“I think you have a point; I did presume that personal-level descriptions involve conscious awareness. This is unjustified and I will need to rework it.”

David P:

“A lot of people think that personal/sub-personal lines up with conscious/nonconscious and with occurrent/dispositional.  But there are counterexamples to all these claims.  Thoughts and perceptions are personal-level states, but routinely occur nonconsciously, as priming experiments and implicit bias experiments reveal.”

 

David P’s criticism here was correct; I did incorrectly line up the personal/sub-personal distinction with the conscious/nonconscious and occurrent/dispositional distinctions. This is a pretty elementary mistake on my part. As David P correctly notes, we often attribute non-conscious thoughts to people in both experimental and practical areas of discourse. As someone who has studied psychoanalysis, neuropsychoanalysis, and cognitive science, I should not have made the mistake of assuming that personal-level ascriptions are always of conscious, as opposed to unconscious or nonconscious, thoughts. Our folk-psychological locutions often make use of unconscious thoughts in explanations[1].

My sloppy use of language aside, I do think that Hurford needs to be clear about what level his description of the frog’s behaviour is pitched at. When Hurford speaks of a frog having a meaningful percept of a fly, he seems committed to viewing the frog’s behaviour in intentional terms; he is using the language of beliefs and desires (whether conscious or unconscious). He is speaking of the intentional object of the frog’s fly-snap being a fly. I see no reason why Hurford thinks that an intentional, as opposed to a pure reflex, explanation is required. Furthermore, I do not understand why, even if we did offer an intentional explanation, we would need to assume that the intentional object of the snap was a fly as opposed to an ambient black nuisance. In short, Hurford neither justified switching to an intentional locution nor justified the intentional object he claims the frog is aiming at. This is a serious problem: Hurford uses the frog’s proto-concepts as a foundation in his explanation of how concepts arrived on the scene and how language evolved, so he needs to clear up this matter before moving on to an explanation of complex conceptual abilities.

David P also criticised some other claims I made. I will discuss these criticisms below as I think they help clarify my position. Here I think the fault lies with my unclear mode of expression rather than with a mistake in my criticism of Hurford. In my blog I made the following claim, which David P disagreed with:

David K:

“When one hears that a frog has a percept of a fly one thinks of a frog forming a meaningful image of a fly. This image is meaningful because it moves a certain way, has a certain shape and size, etc.”

David P:

“This is bizarre, on two counts.  (1) The frog is perceiving the fly, not an image of the fly.  The frog wants to eat the fly, not an image of a fly.  (2) The fly moves and has a shape.  That’s obvious.  Does the image move and have a shape?  Not so obvious. You seem to be relying on a veil-of-ideas view of perception, on which perceiving the material world can only be done via a prior awareness of some mental representation (e.g., an image).  There’s no reason to hold this.  The presence of a mental representation is sufficient; one need not be, in addition, aware of it. You’re also endowing those representations with properties like shape and motion, which they could not plausibly have — particularly if those representations are neurally encoded. Why not say instead that what the frog perceives is the fly, but that how it does so is by using mental representations (of which it need not be aware)?”

David K:

“‘The frog wants to eat the fly, not an image of a fly.’ You are just helping yourself to pre-theoretic intuitions that the frog can make that distinction. There is no evidence that this is the case. You are helping yourself to a view that no evidence supports.”

David P:

“I didn’t intend to take a stand on whether the frog wants to eat a fly or a small moving black dot.  Whether the frog can make that distinction is, I agree, an interesting question.  What I was saying was that whatever it is that the frog wants to eat, it’s not a mental image.  The reason is that the frog has no concept of mental images.  There is no behavioural or neural evidence that the frog is ever aware of mental images.  There may be evidence, of course, that the frog has such images–i.e., that the format of its representation is (sometimes) imagistic, but that’s a very different claim.  Having a mental image is not the same as being aware of that image, or perceiving it, let alone wanting it. Unless, of course, you assume that all mental images must be conscious, but there is abundant reason to believe that this is not so.”

David K:

“I am not presupposing the truth of a veil of ideas theory of perception. Nothing I said indicates this.”

David P.

“You said that when we talk about the frog having a percept of a fly, it’s natural to think that the frog is perceiving a mental image.   This is precisely the veil of ideas view, according to which perceiving a material mind-independent object (e.g., a fly) requires/involves perceiving a mental image.  Perhaps you did not intend this, but your language certainly points in that direction.”

 

Firstly, contrary to what I said in my reply to David P, my mode of expression did indicate that I was endorsing a veil-of-ideas view of perception. This was not, however, my intention. The point I intended to make was that people in general pre-theoretically make this veil-of-ideas assumption (not that I endorse the assumption)[2]. Given that this veil-of-ideas approach is common, it is a natural way to interpret what Hurford may mean by claiming that a frog has a meaningful percept of a fly[3]. In David P’s reply to me he makes the point that the frog wants to eat the fly, not the image of a fly. I replied that we have no evidence that the frog has a concept of a fly, so it would be bizarre to attribute to it the desire to eat a fly. He said he didn’t want to take a stand on the fly versus ambient-black-nuisance issue; he was just arguing that there is no behavioural or neural evidence that the frog is aware of a mental image of a fly. But my point, admittedly badly expressed, was that there is no evidence that a frog has a mental image of a fly, a concept of a fly, or a concept of an ambient black nuisance. We can explain the frog’s behaviour entirely in terms of unconscious, non-intentional reflexes. It is unclear, to me at least, why Hurford starts talking of frog proto-concepts and meaningful percepts of flies. This is why I discussed the debate between Dennett and Fodor on the fly-snapping issue. I sided with Dennett’s conception of the selective environment, rather than some frog mentalese, deciding the issue of what the frog was snapping at. I am unclear what Hurford thinks is going on with the frog.

A further issue David P disagreed with me on was the nature of phenomenology:

David K:

“You are right that representations don’t have shape and motion at the sub-personal level, but they can do at the personal level, as can be seen by analysing the phenomenology; if you want to ignore the phenomenology that is fine, but the issue needs to be addressed, and Hurford doesn’t address it explicitly. But he needs to.”

David P:

“They don’t have those properties at any level.  They are neural states and processes, which are not the sorts of things that can have those properties.  Perhaps they have analogous or structurally similar properties, but that’s a very different sort of claim. I do not intend to ignore the phenomenology.  Focusing on one’s conscious perceptual states does not reveal them to have colour or shape.  It reveals them as representing colour and shape, and perhaps as having properties by means of which that representation is accomplished.  But my perceptual states are not red, in the way that a tomato is red. Again, though, you’re right that Hurford needed to be clearer.  No doubt about that.”

Again, in my haste I didn’t reply to David P as clearly as I should have. I claimed that at the personal level people have representations of movement and shape. This was far too vaguely put. If I want to generalise about what people experience at the personal level I will need to provide statistical evidence of their verbal reports in both experimental and naturalistic settings. In my reply I was focusing on one sub-section of the personal level of explanation, that of conscious experience, and I made sweeping generalisations about people’s reports of their subjective experiences. This is entirely unjustified. I have long written on the danger of the typical mind fallacy: the fallacy of generalising from your own experience of mental reality to the claim that all people experience the world this way. People making the typical mind fallacy might think that, because they are unable to form a mental image, other people who speak of having mental images are speaking metaphorically[4].

In my case I wasn’t generalising from my own experiences to the assumption that all people experience the world in a similar way. Rather, I was assuming that because a lot of people I have spoken to seem to uncritically accept the veil-of-ideas approach, they would describe their experiences in a similar way to the description I give above. Such people would say that they experience mental content with colour, motion, etc., despite the fact that at the sub-personal level no such ideas exist.

I should note two things here. (1) A lot of phenomenologists have noted that the “veil of ideas” is not a description of how people actually experience the world but a theorist’s claim we have inherited from Descartes and Hume[5]. There is some justification to this claim, and it casts doubt on my claim about how people experience things at the personal level[6]. (2) If I want to make claims about people’s reports of how they experience the world I need to reference the literature of experimental philosophy. Do people in general report experiencing a veil of ideas with movement and colour, or do they report representations of movement and colour which are not themselves coloured? Dogmatic claims by me on this topic are out of place. The truth is I haven’t researched the issue sufficiently and will need to do further research before commenting further.

When David P says that representations do not have properties of movement and colour at any level, because neural states and processes are not the sort of things that can have these properties, he is making a theoretical claim. I assume he is claiming that there is nothing else for our mental states to be than brain states (unless we want to be dualists). So even if people claim to experience X or Y, if there is no way the brain can implement these experiences then we are justified in treating the person’s verbal report as a theorist’s (inadvertent) fiction. This seems to be a similar viewpoint to the one adopted by Dennett. In a recent mail David P tells me he is influenced by Sellars’s attack on the myth of the given; I presume that is why he won’t accept verbal reports from subjects that contradict known brain mechanics. I think his approach is a sensible one and I am largely in agreement[7]. What I should have said is that at the personal level people will report experiencing movement and colour in their mind’s eye[8]. Again, this is a lesson in being clearer about what I meant to say and the way I formulated my view.

In some ways, the difficulties I had in discussing subjects’ reports of their own intentional states and conscious experiences underline the need for clarity in Hurford’s book. Hurford moves to a personal-level description of the intentional object of a frog’s fly-snap without any explanation of why this move is justified. I think if his evolutionary explanation of the origin of language is to get off the ground he needs to address this issue before moving on to more complex claims.

Thanks to David P for his helpful criticisms of my last blog. In my next blog I will discuss the following claim by Hurford:

“On the one hand, some, particularly some philosophers, deny non-humans can ever have concepts. For them a concept is essentially bound up with language; only creatures with language, so only humans, can have concepts. This view is now largely eroded.” (Hurford: The Origins of Language p.61)

Hurford doesn’t go into the arguments of the philosophers who strongly equate having a concept with having a language. In my next blog I will present the arguments as put forth by people like Davidson and Brandom and critically evaluate them.

[1] David P uses ‘nonconscious’ as synonymous with the cognitive scientist’s ‘unconscious’. He doesn’t want to use the word ‘unconscious’ because of its Freudian connotations. It is worth noting that neuropsychoanalysts like Mark Solms use the term ‘unconscious’ in a similar way to modern cognitive scientists and have dropped the old Freudian equation of the unconscious with repressed emotion. (Freud, to be fair, did have a concept of the unrepressed unconscious, which Matte Blanco discusses in detail in his ‘The Unconscious as Infinite Sets’.)

[2] Here I am making a claim about how the majority of people interpret the idea of a meaningful percept. The truth is that this claim remains entirely anecdotal; if I want to justify it I will need empirical data. Experimental philosophers are quite good at drawing out what people pre-theoretically believe about the mind. I will need to analyse this data closely to demonstrate that people do indeed presuppose a veil-of-ideas psychology. On this issue the question is very much open.

[3] It could be argued that I am being careless in assuming that, because people in general may make the veil-of-ideas assumption, Hurford is making the same assumption. A trained scientist will generally have their intuitions modified to some degree by experience. This makes it even more difficult to interpret what Hurford means and underlines the need for him to be more explicit about what a “meaningful percept” is for him.

[4] William James and Francis Galton both spoke of the dangers of this fallacy. David Berman has argued that the debate between Locke and Berkeley on abstract ideas stems from the different types of mind of the two theorists. I have discussed the topic in detail in my long blog post ‘Dennett and The Typical Mind Fallacy’, and in less detail in my ‘Intellectual Disability and Radical Translation’.

[5] See Evan Thompson, ‘Mind in Life’, chapter 10, and references therein.

[6] Despite the good work done by a lot of phenomenologists, in this author’s view they do not represent in sufficient detail the variation in how people report experiencing the world.

[7] Obviously this approach is contentious; see Evan Thompson, Chalmers, and Alva Noë for debates on the topic.

[8] And I should have justified this claim with a corpus analysis of how people actually speak on these topics.


Hurford: Frogs, Flies, Dennett and Fodor

The linguist James R Hurford recently wrote a popular introduction to the evolution of language, ‘The Origins of Language: A Slim Guide’. As popular introductions go the book is one of the best out there. Hurford manages to deal with things like the FOXP2 gene in a nuanced manner; likewise, when discussing the KE family, who supposedly have language-specific impairments but spared intelligence, he handles the evidence in a balanced way. He also discusses in detail the importance of imitation and social learning through triangulation (topics sometimes ignored by linguists). This approach is at odds with the highly polemical approach of people like Pinker and Chomsky, who go out of their way to pretend that all the evidence supports them and none supports theorists of a different persuasion. Overall, I think that Hurford’s book is one of the best popular-level introductions to the topic. However, I did have some difficulties with chapter 4 of the book, where Hurford discusses animal concepts as precursors to human language development.

Hurford thinks that the correct way to start discussing simple concepts is with simple examples of animals classifying things in their environment. A necessary (but not sufficient) condition of a creature having a concept is its being able to classify things in its environment. A lot of reflex behaviours satisfy this criterion. He notes that frogs have a reflex that ensures they only snap at certain objects moving in certain ways: the frog’s brain is directly connected to the tongue, and it snaps when objects within a certain proximity move in a certain way. All of this is commonplace and has been widely discussed in the literature since Lettvin et al.’s ‘What the Frog’s Eye Tells the Frog’s Brain’ (1959).

Hurford then goes on to make the following claims about the frog snapping at the fly. For the frog the class of events (moving black objects) has meaning or significance, but it would be a mistake to ask what the moving insect was significant of to the frog; he asserts that the moving insect is not a symbol of anything to the frog. But, he argues, the frog’s brain is partitioned so as to organise things into flies and not-flies. For Hurford frogs have internal mental representations of classes of objects, events, and states of affairs. He is careful to note that the frog’s perception of a fly goes straight to its motor mechanisms and the response is not stored in memory, so in this sense, he argues, the frog does not have a full concept of a fly. The frog can, though, form a percept of the fly, and this percept has significance for the frog. The ability to form significant percepts is, for Hurford, the start of the road to fully fledged concepts (‘The Origins of Language’ p. 62).

Now when one hears that a frog has a percept of a fly, one thinks of the frog perceiving a meaningful image of a fly. This image is meaningful because it moves a certain way, has a certain shape and size, etc. However, Hurford warns that this is not how he intends us to interpret his claims. He assures us that he does not believe that there are pictures in the head which the frog inspects, because pictures are too static and are made of the wrong stuff. However, his explanation from this point on, in terms of networks of neurons connected to each other, while mainly accurate (though simplified), seems to move from the personal level of explanation to the sub-personal level without offering even a cursory defence of doing so.

At the sub-personal level we can talk of the brain interpreting particular patterns in its environment, triggering certain reflexes like fly snapping. This is similar to the way we can talk about a thermostat interpreting particular patterns in its environment which cause the thermostat to behave in certain ways. But talk of meaningful percepts is out of place in both cases. With this in mind one wonders why Hurford speaks of the frog having meaningful percepts; this implies conscious awareness of the black dots, yet he has not provided evidence to support this picture. Likewise, when he speaks of frogs distinguishing between flies and non-flies he gives a confused picture. The frog is distinguishing between dots of a certain size and motion, but as it has no concept of a fly, talk of the frog distinguishing between flies and non-flies is way off base. He also argues that the frog is not inspecting pictures in its mind (because pictures are made of the wrong stuff), while simultaneously arguing that the frog is experiencing a significant percept that is the foundation for more complex conceptual abilities. These claims are in tension with each other, and he needs to resolve the ambiguity either by dropping his intentional locutions which imply conscious awareness or, if he does indeed think that frogs have flies as the content of their experiences, by justifying this claim rather than merely assuming its truth.

Here I am not saying that Hurford needs to solve the hard problem of consciousness before proceeding further. But I think he needs to be clearer about what he means when he speaks of frogs having meaningful percepts. Consider the case of mental imagery and whether it is used in thought (see Kosslyn vs Pylyshyn); this debate is usually handled at the sub-personal level. We know that when people engage in mental imagery and rotate some image in their “mind’s eye”, topographic images are formed in the brain. The debates around these topics centre on whether the brain uses these topographic images in thinking or whether, at bottom, the brain uses a language of thought for all thinking. The point is that in these debates accurately describing the phenomenology of mental imagery is not considered important; behavioural tests and neuroscientific studies are the tools typically used. People’s phenomenological reports are treated in different ways by different theorists. They are sometimes treated as mere verbal descriptions of the way things “seem”, although no actual images are displayed to a subject (see Dennett 2003). Or the mental images can be treated in a realistic manner as something that a person actually experiences (something with real content): we can explain the neural correlates of these experiences but we cannot explain why these experiences arise from those correlates (see Chalmers). When talking of frogs distinguishing flies from non-flies, or of frogs having meaningful percepts, Hurford needs to be clear whether he is speaking at the sub-personal level, like Pylyshyn and Kosslyn, or at the personal level about frogs’ experiences.

The debate on this issue has played a big role in the philosophy of mind, and I think Hurford could have referenced it in a footnote. Dan Dennett has discussed the issue on numerous occasions and has argued that there is no fact of the matter as to what the frog intends when it snaps at the fly:

“And to the extent that there is nothing in the selective environment that uniquely singles out a particular class of occasions, there is also no fact of the matter about what the frog’s eye report really means.” (Dennett: Intuition Pumps p.257)

“Suppose scientists gather up a small population of frogs from some fly-grabbing species on the brink of extinction, and puts them under protective custody in a new environment-a special frog zoo in which there are no flies, but rather Zoo keepers who periodically arrange to launch little food pellets past the frogs in their care. To the keepers’ delight, the system works; the frogs thrive by zapping their tongues for these pellets, and after a while there is a crowd of descendent frogs who have never seen a fly, only pellets.” (ibid p.258)

Dennett notes that what happens to the frogs in his thought experiment happens all the time in evolution. It is a case of exaptation, where a particular piece of machinery is selected for a different function. To make the case clearer Dennett supposes that in the new environment variation in pellet-detecting ability meant that certain frogs were more likely to survive than others. He further argues that there was no particular moment at which we are justified in saying that what the frog’s eye report means changed. There is no fact of the matter about what a frog’s eye report means, and it is a mistake to think that there is some determinate meaning encoded in the frog’s brain in some kind of mentalese. The meaning of the black dots on the frog’s retina isn’t determined by some central meaner in the brain; rather, it emerges gradually through shifts in environmental conditions. He argues that without the “indeterminate” variation in the triggering conditions of the frog’s eyes, selection for a different function would not be possible (ibid p.257).

In ‘What Darwin Got Wrong’ (2011, co-authored with Piattelli-Palmarini), Fodor takes the direct opposite approach to frogs’ fly snapping and uses it as a tool to beat the concept of natural selection. Fodor discusses frogs eating flies in terms of the ‘selection/selection-for’ distinction. He outlines Gould and Lewontin’s argument in detail and says that it has more serious consequences for adaptationism than they realised. To make this point he and Piattelli-Palmarini discuss the heart’s pumping of blood and the heart’s making thump-thump noises. The heart plays a vital role in pumping blood, so evolutionary theorists typically argue that the heart was selected for pumping blood. The problem is that every time the heart pumps blood it also makes thump-thump noises. On this issue evolutionary theorists will argue that the thump-thump noises are free-riders which piggyback on the selected-for property of pumping blood. Fodor’s problem is that while he assumes there is a fact of the matter as to whether trait T1 or T2 plays the primary function, he thinks there is no physical mechanism in the TNS (Theory of Natural Selection) which can distinguish between T1 and T2, so he thinks there are serious problems with the selection-for story.

Fodor argues that this problem of co-extensive traits appears in behavioural learning theory as well. When we perform conditioned-response experiments on rats we learn that certain responses can be elicited by certain stimuli. The problem, however, is that particular responses are massively underdetermined by the data of experience. Suppose in an experimental situation we train a rat in stimulus discrimination. To do this we reinforce the rat in the presence of a yellow triangle (Stimulus A) but don’t reinforce it in the presence of a card with an X on it (Stimulus B) (Fodor and Piattelli p. 103). When the training is complete the rat will produce a particular response when and only when Stimulus A is presented. Fodor notes that when we ask what the rat has learned in this experiment we don’t know, because the rat’s behaviour is underdetermined by the data of experience.

Fodor correctly notes that using Mill’s Method of Difference we can whittle away the various possible explanations of why the rat is behaving as it does. He brings up learning theory to show that its situation and the selection/selection-for issue in the TNS are analogous. In the case of learning theory we have clever experimenters who can whittle down the options and decide what the fact of the matter is. But in the case of the TNS, which theorists claim selects for trait 1 as opposed to trait 2, there is no Mother Nature to do the clever experiment, and of course in the course of evolutionary history, when hearts were developing, there were no clever humans to do the relevant experiment either. We can do experiments now to decide what the fact of the matter is in terms of what is selected for, but Mother Nature would have been blind to this data. Fodor sums up his argument so far as follows: to decide what is learned (or selected for) we need to appeal to counterfactuals (which can be explored in experiments), but “Mother Nature” is blind to counterfactuals and so cannot in principle “select for” one of two co-extensive traits.
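The structure of the point can be put in a few lines of code. This is a minimal sketch using my own made-up stimuli and hypothesis names, not anything from Fodor’s text:

```python
# Sketch of the underdetermination point using the rat example above.
# On the training data alone, several co-extensive hypotheses about what was
# learned all survive; only a counterfactual probe (which an experimenter can
# run, but "Mother Nature" cannot) separates them.

training = [
    {"colour": "yellow", "shape": "triangle", "reinforced": True},   # Stimulus A
    {"colour": "white",  "shape": "x-card",   "reinforced": False},  # Stimulus B
]

hypotheses = {
    "responds to yellow things":    lambda s: s["colour"] == "yellow",
    "responds to triangles":        lambda s: s["shape"] == "triangle",
    "responds to yellow triangles": lambda s: s["colour"] == "yellow"
                                              and s["shape"] == "triangle",
}

# All three hypotheses fit the training data perfectly: underdetermination.
survivors = [name for name, h in hypotheses.items()
             if all(h(s) == s["reinforced"] for s in training)]
print(survivors)

# Mill's Method of Difference: probe with a stimulus on which they disagree.
probe = {"colour": "yellow", "shape": "square"}  # yellow, but not a triangle
print({name: h(probe) for name, h in hypotheses.items()})
# 'responds to yellow things' predicts a response; the others predict none.
```

The experimenter can present the probe and settle the matter; selection, operating only on what actually occurred, has no analogous move, which is Fodor’s complaint.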

Fodor on the Naturalisation of Content:

Fodor argues that the selection-for problem is not limited to learning theory and the TNS; the same problem arises in theories of content. He tries to demonstrate where the problem arises for causal theories of content by discussing the famous question of frogs snapping at flies. Here is Fodor on frogs:

“In a nutshell: if the assumption of local coextensivity holds (as of course, it perfectly well might), then fixing the cause of the frog’s snaps doesn’t fix the content of its intention in snapping: either an intention to snap at a fly or an intention to snap at an ABN (ambient black nuisance) would be compatible with a causal account of what the frog has in mind when it snaps. So causal accounts of content encounter a selection-for problem: If something is a fly if and only if it is an ABN, the frogs behaviour is correctly described either as caused by flies or as caused by ABNs. So, it seems, a causal theory of content cannot distinguish snaps that manifest intentions to catch the one from snaps that manifest intentions to catch the other” (Fodor and Piattelli p. 108)

As in the cases of the TNS and learning theory, Fodor argues that the solution is found by appeal to counterfactuals. He goes on to argue that since fly snapping is a behaviour of phenotypes, the problem is one that is raised for the TNS: the TNS is not capable of deciding whether frogs were selected for fly snapping or ABN snapping. He reiterates the point as follows:

“Rather it’s that the individuation of traits depends on the truth of counterfactuals: since (by assumption) every fly-snap in the actual world is an ABN-snap and vice-versa, selection between fly-snappers and ABN-snappers must be sensitive to the counterfactual consideration that ABN-snapping gathers no flies in worlds where the ABNs are BBs, rather than flies. It’s a nice thing about intentional systems that they are sensitive to merely counter factual contingencies. It means that beliefs can take account of what the outcomes of actions would be if…and the believer can act accordingly.” (ibid p.121)

Of course, as Fodor emphasizes again and again, since selection is not an intentional process it cannot use counterfactuals to decide whether the frog is selected for fly snapping or ABN snapping. So Fodor thinks that this reveals a problem with the TNS and with causal theories of content.

Now the obvious reply to Fodor is that there is no fact of the matter as to whether the frog is snapping at flies or ABNs. Once we accept this we dissolve the problem that Fodor finds so perplexing. If the selective environment doesn’t distinguish between ABNs and flies then there is no fact of the matter as to which the frog intends to snap at either, so there is nothing that the TNS is missing out on in this case. Fodor, of course, disagrees with the claim that there is no fact of the matter about what the frog intends. He argues that it is plausible that frogs intend to snap at flies, but says that if you don’t agree you can move up the phylogenetic ladder until you are satisfied that the creature in question intends or believes that x is the case.

Dennett correctly notes that the problem with this suggestion of Fodor’s is that it assumes there is a clear fact of the matter for creatures a step up the phylogenetic ladder from frogs: either they have concept x or they don’t. However, there is little reason to think that this is true. Evolution doesn’t work in the clear-cut way that Fodor’s intuitions would like; this is not a black mark against evolution, it is a black mark against Fodor’s pre-theoretic intuitions.

My primary point here isn’t to critique Fodor’s views on evolution (I have already done so in an earlier blog), nor is it to say that Dennett is 100% correct in his views on what a frog’s eye tells its brain. Rather, I want to point out that Hurford passes over the topic of frog percepts too quickly and uses language which is ambiguous between personal and sub-personal levels of explanation. I think he could at least have referenced some of the literature on this topic in a footnote. That said, overall his book is one of the better popular explanations of language evolution out there. Now that I have discussed his views on proto-concepts I plan a review of the entire book later.

CONSPIRACY THEORIES, IDEOLOGY AND HONEST DISAGREEMENTS

In discussing certain scientific topics, despite the best of intentions, people cannot reach agreement. Intelligent, honest people interpret the same data differently; they sometimes have different data from each other, and because of the theoretical commitments they make, they are blind to certain facts. In these situations there is little to be done but to have open debate, to formulate the respective theories as clearly as possible, and to determine where theoretical commitments lie and where there are differences of fact. Once this is done it may be possible to determine what experimental and empirical data can decide these matters. But underdetermination being what it is, it is not typically possible to have a demonstrative proof that a particular theory is correct and its rival incorrect. In practical situations underdetermination is not a problem: established scientific theories, like the theory of evolution, have such massive predictive and explanatory power that they have no realistic rivals.

In some cases when theorists do not agree with each other, both sides may feel that the evidence clearly favours their theory. They may feel that their theory is more consistent with known facts, that it makes more accurate predictions, etc. In this case the temptation is to accuse the person you disagree with of having some ideological commitment which blinds them to the obvious facts. This conspiracy-theory approach is quite common in the history of academic discourse. Historically, when Marx’s theories were criticised it was pointed out that those criticising Marx probably had some material interest they were trying to defend, so their criticisms could be dismissed on those grounds. Freudians likewise sometimes responded to criticisms of psychoanalysis by psychoanalyzing the critics of Freud: the critic was analysed as having issues with authority figures stemming from childhood experiences, and with the supposed motive of the critic exposed, the criticisms could be dismissed. When critics of behaviourism complained about the method they were interpreted as closet Cartesians who desperately wanted the soul to exist. Obviously not all behaviourists, Marxists, and psychoanalysts behaved in this manner, but a lot of them did use this approach, and unfortunately it did stifle legitimate criticism of the various disciplines.

Of late I have noted that this manner of dealing with disagreement has emerged in evolutionary psychology. So, for example, in evolutionary psychology forums, when various issues are discussed people are quite frequently accused of adopting a particular position because of some supposed psychological motivation. If a person mentions something to do with sexual harassment they are accused of unconsciously signalling to potential mates that they are a nice guy. Or if people are debating the degree to which humans are innately violent or innately cooperative, they will be accused of taking their position because of unconscious political motivations. Thus if a person thinks that the evidence shows that humans are by nature more cooperative than violent (obviously I am simplifying here) they are accused of defending a Rousseauian view of human nature, and of being a left-wing extremist denying obvious facts to support a political belief. On the other hand, if a person thinks that the evidence supports violence rather than cooperation being dominant, they will be accused of being a Hobbesian defending a right-wing world view despite the obvious facts. In these debates mountains of evidence are presented by both sides. However, quite a lot of people don’t read the counter-evidence and merely attack the motivations of the person presenting it.

What is strange in all of this questioning of people’s motivations is that we all have motivations (some of which we are not aware of) and are all subject to unconscious biases, blind spots, and the like. As human beings we need to be constantly aware that we do not reason perfectly; hence the importance of empirical checks on our theories and of clear rational debate with those we disagree with. When we dismiss those we disagree with as ideologically motivated, and treat this as justification for ignoring the data they present, we forego the possibility of learning.

It is a simple logical point that it is a fallacy to assume that the origin and motivation of a person’s reasons determine the truth or falsity of the reasons the person presents. It may be possible to tell a Freudian or Marxist or evolutionary-psychological story about some unconscious motivation for my believing that X is true. But my motivation is, strictly speaking, irrelevant to the evidence I present. The evidence is good or bad and should be judged on its own merits. This is a truism that all first-year philosophy students are aware of. Yet it is constantly flouted when people have disagreements, particularly in areas to do with human nature.

I myself have been just as guilty of questioning the motives of those I disagree with, and of engaging in ad hominem attacks. But I think this approach is seriously misguided. Focusing on the evidence is what is important, not ad hominem attacks on those who disagree with one. You may not always be able to convince others that you are correct, and you may learn that, despite what you thought, the evidence doesn’t support your position (this should be a good thing). But if you use a person’s supposedly questionable motives as an excuse to ignore data, you are no longer playing the game of science.

Androids, Persons, Electric Sheep and Empathy

I have just finished reading Philip K. Dick’s 1968 classic ‘Do Androids Dream of Electric Sheep?’ Prior to this I had never read any of Dick’s work, though I have seen three films based on it: ‘Total Recall’, ‘Minority Report’ and ‘The Adjustment Bureau’. I wasn’t a big fan of any of the films; all of them raised interesting philosophical issues but somehow left me cold. So I wasn’t expecting much when I picked up ‘Do Androids Dream of Electric Sheep?’ (henceforth referred to as ‘Androids’). ‘Androids’ also had a film based on it, the famous ‘Blade Runner’ starring Harrison Ford, but I have not seen the film, so I really went into the book in the dark and expecting nothing. I also have an unread copy of ‘Philip K. Dick and Philosophy’, part of a popular-culture series designed to explore the philosophical implications of various cultural icons, in this case Philip K. Dick. A former professor of mine, Peter Simons, wrote an essay for the collection. However, I want to read a few of Dick’s books before reading any of the essays in that particular collection; with that in mind I started with ‘Androids’.

‘Androids’ is set on a post-apocalyptic Earth, where the aftermath of a nuclear war has resulted in virtually the entire non-human animal population dying out. The remaining animals are seen as status symbols which people desperately want to obtain; owning a live animal is considered a sign of empathy, and having empathy is supposedly what separates humans from androids. Those who cannot afford a live animal will buy an electric animal that is hard to distinguish from a real one. These animals even require a certain amount of care from their owners. However, the owners of these animals are ashamed of not having a real animal and hide the fact from their neighbours.

Because of the polluted state of the Earth most people are seeking to emigrate to the colonies on Mars. Emigration is easy to achieve as long as one has not been designated a ‘special’. A special is a person who, due to radiation poisoning, has an IQ below a certain point; typically the person’s IQ will continue to deteriorate over time. The specials are known by the derogatory term ‘chicken-heads’.

Rick Deckard, the main protagonist of the book, is a bounty hunter charged with ‘retiring’ (killing) androids who are illegally on Earth. Rick’s previous animal died, so he has to make do with an electric sheep. This fact really bothers him, and part of his motivation for retiring as many androids as possible is the 1,000 dollars he receives for each one. One of the difficulties Rick faces is that the new ‘Nexus-6’ androids are virtually indistinguishable from humans. The only way to tell them apart is to administer a Voight-Kampff test, which is designed to test the capacity of the person/android being interviewed for empathy. The test measures physiological reactions to various stimuli involving sex, animals, murder and so on. One difficulty with the test is that schizoid people who suffer from flatness of affect will fail it, be mistaken for androids, and be killed. So Rick has to be very careful that the test is administered correctly.

The novel culminates in Rick chasing down three androids he intends to retire, who are being sheltered by a ‘special’, John Isidore. The novel is enjoyable and raises real philosophical issues. It doesn’t reach the heights of a Tolstoy or a Dostoevsky, but it does the job of any good novel; as well as being gripping and entertaining, it invites the reader to think.

The novel really makes one think about what it means to be human. In the world of the novel the key criterion is empathy. Androids, while surpassing humans in general intelligence, are incapable of feeling true empathy. Rick muses that evolving in packs over millions of years is what made us empathic towards our fellow creatures, while androids created in a factory do not have this essential feature. Very early in the novel one is faced with a contradiction. Androids are not viewed as persons because they fail the empathy test; yet Rick and the establishment worry about ‘retiring’ schizoid humans who fail the test. If the criterion for being a person is passing the test, then surely schizoid people who fail it should be deemed non-persons as well. This, however, is not the case. The reason is that the empathy test is a pretext for getting rid of creatures that are not human[1], creatures not built from the same material as all other humans. The establishment doesn’t want a human to be killed because they are human, independently of whether they pass the test. Furthermore, one gets the impression that if an android passed the empathy test but was shown (through the bone marrow test) to be an android, Rick would still be licensed to kill him.

Using empathy as a criterion of personhood is strange. The specials are human beings who, through no fault of their own, have suffered a form of brain damage. They are shunned by other humans, denied the opportunity to leave the polluted Earth, and generally treated with contempt. When John Isidore, a ‘special’, helps the three fugitive androids, they treat him with utter contempt, as sub-human. But this situation is mirrored when John’s boss refers to him as a chicken-head and treats him with the same contempt. The lack of empathy here is spread out evenly between the humans and the androids (though it could be argued that at least a human is capable of empathy while the android is not).

Human empathy is indeed a strange and variable thing. It is biologically determined to a degree and shared with other species (again to a degree), though it is massively culturally variable. The moral philosopher Peter Singer has famously described our capacity to widen the class of those we view as members of our moral community as the expanding circle of morality. He notes how there was a time when people viewed those outside their particular tribe as sub-human, and a time when people viewed members of a different race as sub-human; thus black people were viewed by Europeans as being outside the circle of empathy. Through moral philosophy using reason to show that those outside the circle were just as much persons as those inside it, and through novels painting vivid pictures of the rich inner life of those traditionally deemed outside the moral circle, our circle expanded to include the mentally ill, people of other tribes, people of other races, and people of different sexual orientations.

Peter Singer has argued that we now need to expand the circle to include animals as non-human persons. Typically people argue that the key reason animals should not be included within the circle is that they cannot reason to the degree we can. Singer has argued that if we take reasoning ability as the key criterion for inclusion, then we are faced with a conundrum: people with profound intellectual disabilities have, in general, much poorer reasoning abilities than a lot of non-human animals, so those who make reasoning ability the criterion will be forced to exclude such humans from the circle. It is always open to a person to say that being human, biologically defined in terms of DNA, is the key criterion for inclusion. However, this criterion seems no less arbitrary than using skin colour or tribe membership. Furthermore, some people with severe intellectual disabilities have DNA damage, so the DNA criterion is problematic in that respect too. Eva Feder Kittay, a moral philosopher whose daughter has a severe intellectual disability, has challenged Singer for using people with intellectual disabilities as a tool to further animal rights. The debate between them is respectful, as each recognises that the other is trying to minimise suffering. But the issue is emotional and keys in on our deepest held moral intuitions.

Discussions between people like Kittay and Singer are subtle and deep, a far cry from the casual test for personhood used in ‘Androids’. Even outside of moral philosophy our everyday empathy varies from situation to situation. The sociologist Roger Yates has written on the strange fact that in western society we are socialised as animal lovers who casually exploit animals and eat them. Facebook is chock-full of people (including me) who fill their walls with videos of cute animals. I recently posted a video of a cute baby pig. I love pigs; with their intelligence and loyalty they remind me to some degree of dogs. I abhor animal cruelty. Yet I eat meat; I eat pork. So my empathy only goes so far, though reflection on animal life makes me less and less comfortable with my choice to eat meat.

When Rick watches another bounty hunter (Phil Resch) kill an android opera singer whom he admires, he realises he has some empathy for androids. This empathy is increased when he has sex with the android Rachael Rosen. Resch himself worries at one point in the novel that he may be an android. The boundaries blur as to who is an android and who is a human, and this forces the reader to think through the issues for themselves. Just when one is thinking that maybe the androids are misunderstood, one sees an android torture an almost extinct species of spider for her own amusement, and we wonder if all androids are capable of this kind of cruelty. We know some humans are, but not all of them. So the novel constantly gets us to evaluate our moral stance as we read it.

In his most recent book, ‘Intuition Pumps’, Dennett talks about the importance of intuition pumps (thought experiments) for helping us think. He reminds us that they are vital tools for philosophers to use when mulling over a problem. A lot of science fiction tales are basically long, entertaining thought experiments designed to help us think through certain possibilities. However, Dennett correctly notes that some thought experiments direct us to think in a particular direction and to ignore alternatives. John Searle’s Chinese Room is an example of a thought experiment that can lead us to think uncritically in certain directions if we are not careful.

I think some science fiction has this danger inherent in it. A lot of science fiction describes mechanical intelligence as unfeeling, uncaring, calculating machinery: thus we have Data, HAL, the Terminator, and so on. I think we need to re-think this intuition; a lot of current AI centres on social interaction, embodied cognition and the like. There is no a priori reason to think that androids need lack empathy (ignoring for a minute the hard problem of consciousness, which affects humans just as much as androids).

Overall, I think philosophers could benefit from reading more science fiction as a way of expanding their imagination, as long as they are aware of the possibility that they may inadvertently be having certain unhealthy intuitions pumped. Dick’s ‘Do Androids Dream of Electric Sheep?’ is an excellent thought experiment, but one that only works if the reader uses it as a tool and switches its key variables around while thinking about it. Discussion of science fiction among philosophers can only help to stretch the imagination and help them think outside the narrow academic culture they have been trained to think and reason in.

[1] Animals are not human, but having lower IQs they pose no threat to the establishment and hence are viewed as valuable in the world of ‘Androids’.

Boeckx, Research Programmes and Reality

Boeckx: Linguistic Minimalism

 “As Chomsky (1959) convincingly argued, no ‘blank slate’ theory relying solely on external input can account for the creative aspect of language use. Native speakers of any language are able to effortlessly produce and understand sentences in the language that they have never heard or produced before. Chomsky’s rejection of any behaviourist account helped shape what came to be known as the ‘cognitive revolution’ – a mentalistic framework in which inborn (‘innate’) mechanisms played a central role in the acquisition and use of behaviour” (ibid p.17)

Cedric Boeckx, in his (2006) ‘Linguistic Minimalism’, begins in the standard way of virtually all books on Generative Grammar: with a reminder of the weaknesses of behaviourism. Boeckx cites Chomsky’s review of ‘Verbal Behaviour’ as the birth of cognitive science, refuting virtually every aspect of Skinner’s project. In his (2010) ‘Language in Cognition’ Boeckx goes through all of Chomsky’s main arguments against Skinner and then wonders how anyone could ever have accepted such an absurd world view. Boeckx makes all the same misinterpretations of behaviourism that are standard in the Generative Grammar literature. I have argued at length against these misinterpretations in a series of blogs: (1) Poverty of Stimulus Arguments and Behaviourism, (2) Some Behavioural Techniques and The Idea of a Blank Slate, (3) Pecs, Verbal Behaviour, and Universal Grammar. I won’t repeat the material here; interested readers can consult those blogs. I just want to make one point: in ‘Language in Cognition’ Boeckx claims that Skinner’s ‘Verbal Behaviour’ was a collection of the best behavioural evidence on the nature of verbal behaviour. That statement is simply false. ‘Verbal Behaviour’, unlike Skinner’s other books, is not packed with experiments; rather, it was meant as a programme to guide further research into verbal behaviour. Boeckx’s description of the book is a clear indication that he never bothered to read it (I wonder how many generative grammarians have). Those who haven’t read the book should at the very least read Kenneth MacCorquodale’s reply to Chomsky, which demonstrates the programmatic nature of the book.

It is also a fact that Skinner’s proposed research programme into verbal behaviour has led to a lot of empirical research. Michael (1982, 1988), Hall and Sundberg (1987), Carroll and Hesse (1987), and Yamamoto and Mochizuki (1988) all used a behaviour-chain procedure to teach mands to children with intellectual disabilities. Rogers-Warren and Warren (1980) studied manding without using the chain procedure; instead they had the children play with preferred objects and asked the children to mand for the ones they wanted. Simic and Butcher (1980) used two different kinds of food and trained the subject to say ‘I want a …’ when the analyst entered the room with a tray of food. Savage-Rumbaugh (1984) and Sundberg (1985) trained non-human subjects to mand[1].

Sautter and LeBlanc’s (2007) paper showed that between 1992 and 2007 the majority of Verbal Behaviour research focused on two areas: (1) mands and (2) tacts. Furthermore, the majority of research in applied verbal behaviour has been with people with intellectual disabilities and/or autism. So Dixon et al. argue that more research needs to be done on people who are developmentally typical, and on more complex forms of language.

Their results showed that of the 99 articles they analysed, 77 percent were done with atypical members of the population; of that number, 63 articles focused on children and 23 used adults. Only 27 percent of the articles investigated typically developing members of the population; 19 of those examined children and 10 used only adults. Only four studies (4 percent) examined both atypical and typical populations in one article (ibid p. 202).

They conclude that the vast majority of research in this area has been with children with developmental disabilities and/or autism. While this is important and welcome data, the scope of the research needs to be widened to include a much bigger section of typically developing people if Skinner’s ‘Verbal Behaviour’ is to be championed as an adequate theory of language acquisition. Furthermore, they correctly note that we need to go beyond mands and tacts and do experimental and empirical research into things like autoclitics. They note that with recording devices everywhere and mountains of internet conversations taking place, we are swimming in data and have the technology to record it and analyse the functions of various speech patterns (see the sketch below). Verbal Behaviour researchers need to take these limitations of the existing research and overcome them if the research programme is to remain alive. But they are quite clear that if it turns out that autoclitics and the like cannot be acquired in the way Skinner specifies, this refutes Skinner’s Verbal Behaviour project.
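
To give a concrete sense of the kind of function-based corpus analysis being called for, here is a minimal Python sketch. The transcript, the operant labels, and the coding itself are all invented for illustration; in real research the coding of operants would have to be done by trained observers against explicit functional criteria:

```python
from collections import Counter

# Purely illustrative sketch: tally the distribution of Skinnerian verbal
# operants in a transcript that has already been hand-coded. The utterances
# and labels below are invented; they are not from any real corpus.
coded_utterances = [
    ("I want juice", "mand"),
    ("give me the ball", "mand"),
    ("that's a dog", "tact"),
    ("look, a red car", "tact"),
    ("I think it's raining", "autoclitic"),
]

counts = Counter(label for _, label in coded_utterances)
total = sum(counts.values())

# Report how attention is distributed across operant types, e.g. to make
# visible how rarely autoclitics are examined relative to mands and tacts.
for operant, n in counts.most_common():
    print(f"{operant}: {n} ({n / total:.0%})")
```

Even a trivial tally like this makes the point: once large recorded corpora are functionally coded, claims about which operants are and are not acquired become empirically checkable rather than programmatic.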

Having worked with PECS, a behavioural technique inspired by Skinner’s ‘Verbal Behaviour’, I would predict that further empirical research into more complex forms of verbal behaviour will indeed refute the claims made by Skinner. While I am not impressed by Chomsky’s arguments against Skinner (which are nothing more than a caricature), I am not a behaviourist, and I don’t think we can do without intentional locutions in our explanations of language acquisition. But Dixon et al.’s empirical research into verbal behaviour, and their argument that future experiments will determine the success or failure of the programme, stand in stark contrast to Boeckx’s claim for the minimalist programme:

“We are still far from having a fully-fledged minimalist theory of language. This fact has important repercussions for what it means to do research in the minimalist programme. A program is open-ended, it may take a long time to mature, it allows researchers to make maximal use of their creativity as they try to move from minimalist guidelines to concrete principles, it makes room for multiple, not necessarily mutually consistent or compatible perspectives, and cannot be evaluated in terms of true or false, but in terms of fecund or sterile.” (Boeckx ‘Linguistic Minimalism’ p. 6)


He justifies this approach by appeal to the philosopher of science Lakatos. I agree that an entire research programme is not refuted when a single prediction is shown to be false, nor is it falsified by a theoretical claim being refuted experimentally. It is always open to researchers to modify their programme in light of falsified predictions. However, as more and more counter-evidence mounts against a particular research programme, its adherents should suspect that the programme is false. Boeckx and company have no problem listing a series of facts that show the behaviourist research programme is a dead end with a series of refuted claims. I think they should hold themselves to the same standard. It is frankly a bit bizarre to begin a book by claiming that Chomsky effectively refuted the behaviourist research programme, and a few pages later to assert that Chomsky’s own research programme cannot be refuted.

Boeckx has no problem accepting that particular claims can be refuted in science; he just doesn’t seem to think that mounting refutations should be taken as a sign that a research programme has been refuted. I am not so sure. I think that if a workable poverty of stimulus argument were constructed, this would refute rival programmes which rely on language being learned by domain-general procedures. Unfortunately, I have never seen a workable poverty of stimulus argument.

Interestingly, Boeckx does discuss a poverty of stimulus argument early in his ‘Linguistic Minimalism’. One wonders why he does this. Is it because he thinks it refutes rival research programmes? If so, this attitude is wildly at odds with his attitude to the Minimalist Programme, which we are told cannot be falsified.

Boeckx discusses the famous structure-dependence poverty of stimulus argument, the standard case used by Nativists. Boeckx (to his credit) actually addresses the evidence that Sampson, Pullum and Scholz have put forth on the issue. He replies by making the correct point that the issue isn’t whether a child is presented with examples in their primary linguistic data (PLD) which are evidence for the structure-dependence rule, but whether there is enough evidence in the child’s PLD for him to learn the correct rule. Boeckx is right that the mere presence of the data in the PLD is not the primary issue. Nonetheless, Chomsky did make the following claim:

“The child could not generally determine by passive observation whether one or the other hypothesis is true, because cases of this kind rarely arise; you can easily live your whole life without ever producing a relevant example…you can go over a vast amount of data of experience without ever finding such a case… ( Chomsky 1980, 121)”

This claim has uncontroversially been refuted by the data of Pullum and Sampson. Nonetheless, Boeckx is correct to note that the important issue is whether the child can learn from the data they experience. He appeals to Legate and Yang’s (2002) attempt to quantify how many examples need to be in the child’s PLD in order for the child to be able to learn the relevant construction.

Legate and Yang (2002) argued that if we compare the evidence a child has in his PLD for constructions that we know are learned with the evidence the child has for the structure-dependence rule, we have a good yardstick for determining whether the child has enough evidence to learn the rule. Legate and Yang are to be applauded for trying to move the argument on and specify further ways of testing the issue. However, as Clark and Lappin (2011) correctly note, tests such as Legate and Yang’s really only make sense in terms of a specified learning theory, and Yang does not provide one.

Legate and Yang’s benchmark of a rule that is demonstrably learned is the English requirement of overt subjects (the loss of null subjects in child language): the child reaches the adult level at about three years of age. Boeckx glosses their argument as follows:

“The core examples which inform children that all English (finite) sentences require phonologically overt subjects are sentences involving pleonastic subjects (e.g. there is a man here). Such sentences amount to 1.2 per cent of the potential PLD (all sentences). Legate and Yang suggest, quite reasonably, that the PLD relevant to fixing the Y/N question should be of roughly comparable proportion” (Boeckx 2006 p. 25)

Boeckx then notes that a search of the CHILDES database reveals that sentences relevant to learning the auxiliary-inversion rule make up only 0.045 to 0.068 percent of the sentences the child experiences, and he concludes that this is far too little data for the child to learn the rule from.
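
The shape of Legate and Yang’s calculation is easy to make explicit. Here is a minimal Python sketch of the comparison; the total sentence count comes from their paper, but the raw construction counts below are hypothetical placeholders back-calculated to match the percentages just cited:

```python
# Sketch of Legate and Yang's frequency comparison; NOT their actual code.
# Only the total (66,871 adult sentences in the Nina and Adam corpora)
# comes from their paper; the construction counts are hypothetical,
# chosen to reproduce the cited percentages.

TOTAL_SENTENCES = 66_871

def proportion(count: int) -> float:
    """Fraction of corpus sentences instantiating a construction."""
    return count / TOTAL_SENTENCES

# Benchmark: pleonastic subjects, the evidence for the overt-subject rule,
# which children demonstrably learn by about age three (~1.2%).
benchmark = proportion(802)

# Target: sentences bearing on structure-dependent auxiliary inversion
# (~0.045-0.068% on Boeckx's gloss; take the upper bound to be generous).
target = proportion(45)

print(f"benchmark: {benchmark:.3%}, target: {target:.3%}")
if target < benchmark:
    print("By Legate and Yang's yardstick, the PLD is too sparse "
          "for the rule to be learned by the benchmark age.")
```

Note, though, as Clark and Lappin stress, that a threshold comparison like this only means something relative to a specified learning theory; nothing in the arithmetic itself tells us how much data a given learning mechanism actually needs.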

Legate and Yang (2002) even go on to argue that of the roughly 67,000 sentences observed in the CHILDES corpora, none of the adult utterances uses sentences like (9):

Not only are those frequencies far below the magic figure of 1.2 percent required to learn the correct rule by the 36th month, it is also low enough to be considered negligible, that is, not reliably available for every human child. And interestingly, the canonical type of critical evidence, [aux [NP … aux …] e …], appears not even once in all 66,871 adult sentences found in both the Nina and Adam corpora – the standard statements of the APS are not hyperbole as P&S charged. Hence the original APS stands unchallenged: the knowledge of structure dependence in syntax, as far as we can test quantitatively and comparatively, is available to children in the absence of experience. And the conclusion then seems to be Chomsky’s (1975: 33): ‘the child’s mind … contains the instruction: Construct a structure-dependent rule, ignoring all structure independent rules. The principle of structure-dependence is not learned, but forms part of the conditions for language learning.’ (Legate and Yang (2002) ‘Empirical Re-assessment of Stimulus Poverty Arguments’, pp. 158-159)

As I said above, Legate and Yang (and Boeckx) should be applauded for their attempt to deal with Pullum et al.’s data; however, I think their negative conclusion is unwarranted. In their (2011) ‘Linguistic Nativism and the Poverty of the Stimulus’, Clark and Lappin used learnability models to tackle the claims of Legate and Yang. They noted that assessing whether such constructions can be learned from experience requires mathematical models of how learning from so few examples is possible. Such programmes have already been developed: Clark and Eyraud (2007), Perfors et al. (2006), and Reali have all developed programmes which can learn from even less data than that discovered by Pullum, Scholz and Sampson. Clark and Lappin:

 “In subsequent sections we consider work in computational learning theory applied to grammar in order to clarify the question of what data is needed for learning the principles governing polar interrogatives and related syntactic phenomena. The first paper (Clark and Eyraud, 2007) which we discuss in more detail in Chapter 8, shows that a very simple grammar induction algorithm based on distributional patterns acquires rule from a small data set that does not include examples like 8b (Is the student who is in the garden hungry?). The second paper (Perfors et al., 2006) indicates that learners can infer hierarchical structure in a language on the basis of a simple domain-general learning prior.

Both of these papers adopt the same general perspective. They grant the absence in the PLD of examples that effectively distinguish between correct and spurious rules for polar question formation. They also reject a transformational account of the relation between declarative and interrogative forms, relying instead on a context free grammar. They show that the correct interrogative form can be learned without seeing any examples in what is purportedly the set indispensible data” (Clark and Lappin ibid p.40)

A few points need to be noted here. Firstly, Clark and Lappin correctly note that just because these programmes can learn from the relevant data doesn’t mean that the brain uses the same procedures to acquire a language. But it does mean that the poverty of stimulus argument as presented by Yang has been refuted. Secondly, Clark and Lappin, along with Yang, assume that the rule in question is actually a rule of spoken language, and this assumption may not be warranted. Since analysis of the CHILDES corpus (actual speech) does not turn up examples of the relevant construction, why assume the rule exists there at all? Geoffrey Sampson notes, correctly in my view, that the rule is a rule of written language (which explains people’s intuitions of grammaticality); but since people don’t actually speak in ways that conform to the rule, why presume it governs how people form questions when they actually speak?
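
To make the distributional idea behind Clark and Eyraud’s result concrete, here is a toy Python sketch. It is not their algorithm, and the three-sentence corpus is invented; but it illustrates the core move: strings that occur in the same context are treated as substitutable, and that alone licenses a complex polar interrogative never seen in the data.

```python
from itertools import combinations

# Toy sketch of distributional learning in the spirit of Clark and Eyraud
# (2007); NOT their algorithm. The corpus is invented and deliberately
# excludes complex interrogatives like "is the man who is tall hungry".
corpus = [
    "the man is hungry",
    "is the man hungry",
    "the man who is tall is hungry",
]

def splits(sentence):
    """Yield ((left, right), middle) for every contiguous substring."""
    toks = sentence.split()
    for i in range(len(toks)):
        for j in range(i + 1, len(toks) + 1):
            ctx = (" ".join(toks[:i]), " ".join(toks[j:]))
            yield ctx, " ".join(toks[i:j])

# Group substrings by the contexts in which they occur.
by_context = {}
for s in corpus:
    for ctx, mid in splits(s):
        by_context.setdefault(ctx, set()).add(mid)

# Substrings sharing a context are treated as congruent (substitutable):
# "the man" and "the man who is tall" both occur in ("", "is hungry").
congruent = set()
for mids in by_context.values():
    for a, b in combinations(sorted(mids), 2):
        congruent.update({(a, b), (b, a)})

# Generate novel sentences by swapping congruent substrings.
novel = set()
for s in corpus:
    for (left, right), mid in splits(s):
        for a, b in congruent:
            if mid == a:
                new = " ".join(part for part in (left, b, right) if part)
                if new not in corpus:
                    novel.add(new)

# The complex interrogative is licensed despite never being observed:
print("is the man who is tall hungry" in novel)  # True
```

A toy like this wildly overgenerates, of course; Clark and Eyraud’s actual algorithm uses the congruence relation to build a constrained context-free grammar. The point is only that the ‘crucial’ sentences need not appear in the PLD for the correct interrogative form to be learnable.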

Boeckx thinks, incorrectly, that Legate and Yang have clinched the case for the poverty of stimulus argument. He then goes on to argue that, independently of the Legate and Yang evidence, opponents of poverty of stimulus arguments are on even weaker ground than they think. To support this claim he invokes the standard claim of Generative Grammarians that children don’t try out false constructions like ‘*Is Mary will believe that Frank is here?’ or ‘*Is the man who tall will leave now?’, citing Crain and Nakayama’s 1987 experiment as evidence. He then claims that even if children did try out ungrammatical constructions like the preceding ones, there would be no way for them to learn the relevant rules from positive or negative data, since children are not typically corrected for ungrammatical constructions, and even when they are, they do not make use of the corrections.

A couple of points need to be made here. Firstly, Crain and Nakayama (1987) is widely cited in the literature as proof that children don’t try out the relevant false constructions. But Sampson (2005) has noted that there is a real problem with interpreting this experiment. Since the corpus data indicate that people don’t form interrogatives according to the rules of written language (the rules postulated by Chomsky et al.), it is strange that the children in Crain and Nakayama’s experiment formed questions in a way that never occurs in actual speech[2]. This indicates that there may have been some element of accidental priming in the experiment. At the very least the experiment needs to be replicated before being used as conclusive proof on the issue.

Also, Chouinard and Clark (2002) present experimental evidence that children are corrected for making grammatical mistakes and do make use of these corrections. At this point we have a lot of conflicting experimental data on how much use children make of implicit corrections. So Boeckx is wrong to present things as though the poverty of stimulus argument is proven; the argument relies on a series of unproven and disproven claims. Furthermore, other generative grammarians, like John Collins, have argued that the poverty of stimulus argument as presented by Pullum, Legate and Yang is not the real poverty of stimulus argument. I have dealt with Collins’s version of the argument (as well as a more recent APS constructed by Berwick, Chomsky et al. 2011) in my earlier blog ‘Poverty of Stimulus Arguments and Behaviourism’; I won’t repeat the material here. My point is merely that, as far as I can see, the example of a poverty of stimulus argument Boeckx brings up does not work. If he could construct a workable poverty of stimulus argument he would have refuted his opponents, and I think he should accept the converse: Minimalism should be held to exactly the same standard.

When Boeckx notes the following:

“Minimalists endorse the belief (held by all major proponents of modern science, from Kepler to Einstein), that nature is a realisation of the simplest mathematical ideas, and a theory should be more highly valued if it gives us the sense that nothing could be changed…a sense of uniqueness,…a sense that when we understand the final answer, we will see that it could not have been any other way (Weinberg 2001).” (Boeckx p. 9)

He is pointing out that an approach to studying nature similar to the minimalists’ has been very successful for physicists, and presumably implying that a similar approach is likely to yield similar success in linguistics. Maybe so; however, I doubt whether the laws of physics will have close correlates in the structures of creatures built by the tinkering process of natural selection. Either way, the nature of human cognitive structures will ultimately be decided empirically. Minimalists cannot avoid this truism: if reality consistently conflicts with their beliefs, then their beliefs must change.

[2] I should say that they never form questions in the manner Chomsky et al. predict they should in the limited corpus analysis of speech interaction that has been done so far. Much more research needs to be done to say for sure.

Williams Syndrome and the Language Instinct

Williams Syndrome is a rare developmental disorder that affects 1 in 20,000 people. Its genetic origin has been discovered: it results from a deletion of some 17 contiguous genes on chromosome 7q11.23 (Gerrans 2003). Despite working with people with intellectual disabilities of various kinds, I have only ever worked with one child with Williams Syndrome. This is not the place to discuss the child; I will merely note that he was friendly and fun, and did indeed seem, from an intuitive perspective, to have a verbal IQ which far outstripped his general IQ. However, as we all know, a person’s subjective impressions are not a sound basis for scientific analysis. I mention my impressions only to indicate that I can see why Nativists were led to argue that syntax was spared in people with Williams Syndrome. This is the subjective impression that people with Williams Syndrome give, and a lot of early tests indicated that language, in particular syntax, was indeed spared.

In his excellent 1994 book ‘The Language Instinct’, Steven Pinker mounted a series of arguments for his theory that people have an innate, domain-specific language faculty. He gave evidence from a variety of quarters: poverty of stimulus arguments, the speed and apparent effortlessness with which children learn their first language, the claim that children can learn language without any correction for ungrammatical mistakes, and so on. I have gone through all of his arguments in other blogs and have been very critical of them; I will not repeat the material here. In this blog I just want to discuss his claim that an apparent double dissociation occurs between Specific Language Impairment (SLI) and Williams Syndrome, and that this is evidence for an innate domain-specific language faculty.

In a nutshell, Pinker (1994) argued that people with SLI have unimpaired general IQ but constantly make specific grammatical mistakes, while in people with Williams Syndrome the opposite appears to be the case: a low general IQ (between 50 and 70) but unimpaired syntax. Pinker used these two disorders as evidence that language and general intelligence are separate faculties, and in particular that there is indeed an innate domain-specific language faculty.

The evidence from SLI has come increasingly under pressure in the 20 years since Pinker wrote ‘The Language Instinct’. People with SLI typically have damage to the FOXP2 gene, a fact which led people like Pinker to argue that FOXP2 was actually a gene for language. It turns out that FOXP2, which the press dubbed ‘the language gene’, is shared by a number of other animals, so it isn’t specific to humans and hence isn’t solely responsible for creating a language faculty. Experimental research into FOXP2 in other animals reveals that the gene plays a big role in motor control, and it also turns out that a lot of the supposedly syntactic deficits in people with SLI are actually deficits in speech based on poor motor control. Furthermore, despite Pinker’s assertions, people with SLI do not typically have a normal IQ; in fact they show a lot of non-linguistic cognitive problems and have a general IQ of between 75 and 91 (see Plomin and Kovas, ‘Generalist Genes and Learning Disabilities’, 2005). Overall, despite the initial hype, there is little reason to think that SLI offers any convincing evidence for a dissociation between language and general intelligence.

Pinker didn’t just speak of a dissociation between language and intelligence; he spoke of a double dissociation. The case of SLI doesn’t stand up to critical scrutiny; Williams Syndrome, however, fares a bit better. The relatively spared syntactic abilities of people with Williams Syndrome would be good evidence for the existence of a language faculty; however, the data on the disorder is not as straightforward as it may seem.

Nativists typically argue that in Williams Syndrome language (in particular syntax) is spared despite the fact that general intelligence is badly affected. However, Grant et al. (2002), ‘A Study of Relative Clauses in Williams Syndrome’, found that in a simple elicited-imitation task examining the syntax of relative clauses, older children and adults with WS only reach the level of a typical five-year-old. This is an important result, badly at odds with claims that syntax is spared in people with WS. They note that, despite claims to the contrary, the non-verbal skills of WS children who do well on language tests are not so low as to suggest idiot savants either. Furthermore, they argue that each time an empirical study is done on people who are clinically and genetically diagnosed, the language turns out to be delayed or deviant (Grant et al. p. 404). They conclude that syntactic structure poses some difficulty for people with WS: people with WS are seriously delayed in syntactic development, even into adulthood, and older children and adults with WS are significantly impaired in their ability to correctly repeat relative-clause sentences. The data suggest that people with WS are dependent on overt markers in order to process multi-clause sentences.

Now Grant et al. is an interesting paper which clearly shows that the claim that language and syntax are spared in people with WS, while general IQ is extremely low, is a bit of an exaggeration. Syntax is not entirely spared, as can be seen from the difficulties children with WS have with imitating various kinds of relative clauses, and children with WS who are more proficient with language typically have a higher IQ than less verbal WS sufferers. Nonetheless, Grant et al. still indicate that syntax, while not entirely spared, is surprisingly good given the typical general intelligence of WS sufferers. So the research could, at a stretch, be used to support a Nativist position, though this would not be a straightforward matter and a lot more work would need to be done.

The fact that syntax is not entirely spared in people with WS shows that the data are more complex than some Nativists assume. Another complexity is that children with WS actually acquire language in a different way than typically developing children. Laing et al. (2002), ‘Atypical development of language and social communication in toddlers with Williams syndrome’, showed that people with WS are impaired in the triadic interaction that is necessary for the referential uses of language. Their study further showed that WS children don’t show the typical correlations between socio-interactive markers and language seen in typical controls: in WS toddlers speech preceded pointing, whereas it followed pointing in the typically developing and DS toddlers. This suggests that despite acquiring a big vocabulary, people with WS acquire language in a different way than typically developing people. In previous studies, Mervis et al. (1999) showed that in a free-play situation toddlers with WS did not spontaneously use the pointing gesture, and other studies indicated that children with WS did not engage in coordinated joint attention until well after the vocabulary spurt. Laing et al. wanted to test these claims further, and to test whether problems with motor control were behind the fact that children with WS typically didn’t point. They discovered that the WS group produced much less pointing behaviour than the control group, and their data show that the WS group were less interested in objects than the control group. It was noteworthy that the WS group were more social than the control group, yet the control group produced more triadic eye contact (Laing et al. p. 237). People with WS are deficient in both the production and the comprehension of pointing, and the motor tests showed that the lack of pointing could not be attributed to poor motor control. Despite the lack of triadic eye contact and pointing, children with WS do use words referentially. Overall, they take their studies to show that WS language acquisition follows a different developmental trajectory than normal language acquisition.

The fact that children with Williams Syndrome seem to acquire language in a different way than typically developing children is important, but not necessarily inconsistent with the Nativist hypothesis. Nativists[1] typically argue (wrongly, as we have seen) that syntax is spared in people with WS. The fact that children with WS have difficulty both producing and interpreting pointing is not necessarily relevant to the issue of an innate domain-specific language faculty. Chomsky, for example, restricts the faculty of language in the narrow sense (FLN) to pure syntax (the operation of merge), and he would argue that pointing and learning the meanings of words are operations of the sensory-motor and conceptual-intentional systems, and so have little to do with the narrow language faculty, which is what is supposedly spared in WS. Theorists like Pinker, who don’t divide up the language faculty in Chomsky’s way, can still argue that it is syntax that Nativists claim is spared, so the fact that children with WS develop the pragmatic use of language differently is not that relevant.

However, I think a closer look at the developmental phases children go through when acquiring language is vital. We know that the syntax of people with WS is different, that their semantics is different, and that they acquire their language differently than typically developing subjects. It seems obvious to me that before we start talking about spared language abilities in WS subjects we need to do a lot more research. A lot of the research so far has consisted of experiments done on about ten subjects on average. I think more corpus analysis of the actual speech patterns of people with WS is needed, and more longitudinal studies are necessary as well. We have more sophisticated recording technology available than ever before; using it to record and analyse actual speech (assuming permission from the subjects and their parents) is surely vital and will vastly increase our knowledge. Another area that needs to be researched is the effect of various forms of therapy for people with WS (ABA, speech and language therapy, etc.) on improving their linguistic skills and their general reasoning abilities. Until more research of this kind is done, I think claims of spared language abilities in people with WS are irresponsible.

Martens et al. (2008), ‘Research Review: Williams syndrome: a critical review of the cognitive, behavioural, and neuroanatomical phenotype’, analysed the recent studies of WS. In this meta-analysis they note that while it was originally believed that language (in particular syntax) was relatively spared in people with WS, more recent studies have called this belief into question. People with Williams Syndrome use fewer gesturing skills than typically developing people; they have a vocabulary size well below the average for their age; they display atypicalities in their substitution of articles and prepositions; and they show evidence of typical but delayed, as well as atypical, syntax, morphology and vocabulary (Martens et al. p. 581). Some recent research has revealed atypical performance in grammatical comprehension, gender agreement, morphosyntax, pragmatics, oral fluency, and semantic fluency. They make the following important point:

“Variable findings across grammatical studies most likely reflect methodological issues such as differing sample sizes and the use of various language tests and comparison groups, as well as inconsistencies in language skills between and within WS individuals.  (Stojanovik et al. 2001 p. 37) stated that ‘any attempt to generalize from group studies should be approached with extreme caution.’ Despite this, generalisations frequently occur in secondary sources (Grant et al 2002) and findings of RELATIVE strengths evolve into claims of SPARED language skills in WS. (See Pinker, 1994, 1999)”

Martens et al. (2008) ‘Research Review: Williams Syndrome: A critical review of the cognitive, behavioural, and neuroanatomical phenotype’.

I agree entirely with the above point; it is not that Nativists have been conclusively refuted on the WS issue, rather that the data indicate we simply don’t know enough to decide either way. As the old cliché goes, more research is needed, as well as fewer extravagant claims by popular writers like Pinker. There is indeed good evidence that syntax is surprisingly good in WS subjects, but there is also growing evidence that it is not entirely spared. In her ‘What’s Within’ Fiona Cowie notes that Rubba (1991) found that 19 percent of WS subjects’ uses of prepositions were deviant. Clark and Lappin (2011) quote Brock, who carried out a very large survey of recent studies of the language of people with WS and concluded: “There is currently little evidence for selective preservation of linguistic skills in Williams Syndrome”. Furthermore, people with WS typically have the same mental age as five-year-old children, and five-year-olds have relatively fluent linguistic abilities.

I think the facts show that Nativists have not yet made their case. Nonetheless, given how little we know at this stage, what is needed is more research, not dogmatic claims which stretch beyond the known evidence.

In this blog, because of time constraints, I have not described and analysed the experimental techniques used in the papers I mentioned. Furthermore, I have not used enough data from meta-analyses to show why I think future research needs to take the road I advocate. In my next blog on Williams Syndrome I will deal with these topics.

[1] I am being a bit vague in my use of ‘Nativist’. Obviously there are many different Nativist theories in the Generative Grammar literature. Within the space limitations imposed by a blog I cannot go into all of the different Nativist theories. I may in the future examine which theories of Nativism are more consistent with the WS data.