Monthly Archives: February 2015

Concepts: A reply to Jim Hamlyn and John Kubie

Neuroscientist John Kubie criticised a recent claim I made in my Blog-Post ‘What is a Concept and What Creatures Have them’. He argued as follows:

 “Your second sentence “A concept is to some degree a theoretical term that we use to explain, predict and control the behaviour of various different creatures” seems wrong to me. For me the core sense of “concept” has nothing to do with communication or control of other creatures. It has to do with aggregation of diverse stuff into a single “symbolic?” Entity. It’s a logical operation.”

I think what I wrote in this instance was a bit ambiguous, so it may need some clarification. I was speaking about the evidence we use to justify our attribution of a concept to another creature, and noting the importance of behavioural evidence in such an attribution. So, for example, consider the paradigm where researchers try to discover what conceptual abilities pre-linguistic children have by monitoring their surprise reactions. A child is typically habituated to a particular phenomenon, e.g. a ball passing behind a screen and coming out the other side. However, in experimental settings, after the child is habituated to the particular phenomenon they are presented with strange behaviour of the ball. So one ball may pass behind a screen and two balls will come out the other side. Or a ball may go behind the screen and not come out the other side. If a child looks surprised at these events we will assume that they have some expectations of how objects should behave and are surprised when the objects do not behave in this way. Now Piaget and Quine both criticised this experimental procedure, claiming that it merely shows that children have expectations of continuity in their environment, not that they have concepts of Object, Number etc. And in recent times Carey and Soja have made the experiments more sophisticated to avoid the objections of Piaget et al. Now my point here isn’t to evaluate the theories of Carey et al; it is merely to indicate that in these experimental settings concepts are attributed to children because of the explanatory role they play in the theorists’ experimental theories. Or, outside the lab, people may attribute a concept of a fly to a frog because of its behaviour around flies (in this instance they would probably be wrong).
In these settings we use our attributions of concepts as part of an explanatory schema to explain what the creature will do, and will modify our attributions if the creatures behave in ways that we did not expect.

When John argued that:

 “The core sense of “concept” has nothing to do with communication or control of other creatures. It has to do with aggregation of diverse stuff into a single “symbolic?” Entity. It’s a logical operation”

I am confused as to what he meant in the above claim. Is he offering a theoretical definition of the term ‘Concept’ which he thinks will make our theorising easier? Is he in a sense offering a convention as to how the word ‘Concept’ is to be used in scientific theorising? Or is he arguing that his definition corresponds with the ordinary language use of the word ‘Concept’? I would argue, following the philosophical work of Wittgenstein and the scientific research of Lakoff and Rosch, that there is no one accepted meaning of ‘concept’. Rather, the word ‘concept’ is a family resemblance term (though people may have a prototype of a concept in mind when they use the term). Either way, I would need to know what John has in mind when he argues that a concept is an “aggregation of diverse stuff into a single symbolic entity” before I can adequately reply to his comment.

In my blog ‘What is a Concept and Which Creatures Have Them’ I criticised philosophers who think that only language-using creatures have concepts. I argued that these theorists are acting like Platonists about the nature of concepts and ignoring recent empirical research into the subject. The philosopher Jim Hamlyn has long argued that animals have no concepts unless they are language users and tool users. So, for example, while dolphins may be among the most social and intelligent animals on the planet, because they don’t use tools he would argue we are not justified in attributing conceptual abilities to them. In his recent, very interesting blog ‘Tools in the Workshop of Language’ Jim defended and clarified his position on animal concepts.

One thing that is very clear from the blog is that Jim adopts a guilty-until-proven-innocent approach to attributing concepts to non-linguistic animals. He argues as follows:

“Secondly, the assumption that certain sophisticated behaviours can only be explained by concept-possession is only justified if every other explanatory alternative has been ruled out.”  
This seems a very strange criterion to offer as to when a theorist is justified in attributing a concept to another creature. A theorist typically doesn’t say that a behaviour can only be explained by concept attribution; rather, they would argue that attributing conceptual abilities to these creatures is the best way of explaining their behaviour.

By setting things up in this way Jim gives himself an easy task. He argues that people who attribute concepts to non-human animals claim that ONLY conceptual explanations are possible, so if Jim can tell any story that doesn’t attribute conceptual abilities to non-human animals he has automatically refuted his opponent. Except of course he hasn’t refuted his rival theorist, because the rival theorist claims that the conceptual explanation is the best explanation, not that it is the only one.

Jim seems to start with the assumption that, from the outset, the conjecture that animals have no concepts should be favoured over the view that they do have them. Here is an example of this:

“Experience leads to the development of expectations about the regularities of the universe and of unfolding events. I propose that we rule out all possible nonverbal explanations before we are tempted to ascribe capacities of conceptualisation to nonverbal creatures.” (Hamlyn, ‘Tools in the Workshop of Language’)

By adopting this question-begging approach he shifts the burden of proof onto his opponents. This approach is problematic, though: it is not always easy to decide whether view A or view B should be accepted as our null hypothesis. In her recent book Kristin Andrews spelled out the difficulties in deciding what our null hypothesis should be:

“But perhaps a greater problem arises at the point of deciding on the null hypothesis itself, because beginning an investigation of a property with a sceptical view may introduce a bias against animals having that property. When our concerns are purely epistemic, as they presumably are in the case of animal cognition, it isn’t clear why either the sceptical or optimistic hypothesis should get preferential treatment from the outset” (Andrews, ‘The Animal Mind’, p. 42)

I think Andrews has a point; in debates about whether we should attribute various mental abilities to animals, people do seem to operate with the null hypothesis that it is simpler to avoid attributing things like concepts to animals if we have an alternative story. Marian Stamp Dawkins in her ‘Why Animals Matter’ adopts this approach. Andrews notes that there are some good reasons not to adopt the null hypothesis in the way people like Dawkins do. She cites Elliott Sober (2012) and Frans de Waal (1991), who have argued that, given facts about convergent evolution and the continuity of life on earth, the opposite assumption should be made. Sober’s idea is that if humans and closely related species (apes) both exhibit behaviour X, and humans produce behaviour X by occupying mental state M, then we have evidence that M is the proximate mechanism that the chimpanzees employ in producing X (ibid. p. 43)[1]

I personally prefer to adopt a wait-and-see approach. I think Sober’s approach is ambiguous as to how far down the scale of species we can go and still assume mental continuity. Those who prefer to make the opposite assumption typically do so by arguing that their approach is simpler, but they rarely specify precisely how they are judging simplicity. In my view, if we have two theories, one of which says that dogs are concept users and the other of which says they are not, and we do not have sufficient evidence to decide the issue, we should continue our search for the correct answer, not make question-begging assumptions as to which approach we should assume is true by default.

As his blog-post goes on, Jim argues that Jerry Fodor’s language of thought thesis is impeding progress in the study of animal mental capacities. This is news to me. Fodor is pretty much a whipping boy in cognitive science. He is one of the most cited theorists in the philosophy of mind, but he is typically only cited so that people can say why they disagree with him. Researchers into the conceptual abilities of pre-linguistic children such as Spelke, Carey, Soja and Markman have all been very critical of Fodor’s views on concepts. Susan Carey, in her (2009) book ‘The Origin of Concepts’, is very critical of Fodor’s views on concepts. I rarely see his name mentioned in the animal ethology literature. I think that to focus on Fodor’s views on animal conceptual abilities in the context of this discussion is radically wrongheaded, given how many cognitive scientists disagree with him.

When dealing with proponents of a particular position I think it is best to focus on the best theorists holding that position. In defence of his position Jim mentions the arguments of Davidson as being very compelling against the view that animals have concepts. In an earlier blog I evaluated Davidson’s arguments in detail and critiqued them all. I think Jim would not find it very helpful if I pretended that Descartes (who held that animals were unthinking machines) represented the best current case for Jim’s position. Likewise, I respectfully submit that Jerry Fodor doesn’t represent mainstream cognitive science, and if Jim wants to criticise representational views in cognitive science then refuting Jerry Fodor is not the way to go.

Jim ends his blog-post with an interesting discussion of what he takes to be a problem with attributing concepts to animals. He notes correctly that an animal learning a few proper nouns does not mean that the animal can manipulate and combine them in the way language users do. He argues that a creature who has concepts but doesn’t have the skills to use them in productive ways has nothing more useful than a stone, or a stick, that it doesn’t know how to use.

I think Jim’s emphasis on the analogy between concepts and tools is a good one and warrants further exploration. When we attribute concepts to animals we are arguing that they can use these concepts to think and decide what to do. When making this claim we need to be very particular about what we are claiming. Any claim that an animal can use concepts in some internal language of thought by combining them in certain ways needs to be very clear about the behavioural evidence for how these concepts can and cannot be combined. This type of evidence is rarely if ever presented, and until it is I think sceptics about animal concepts will have a point. In his paper ‘Why We Can’t Say What Animals Think’ Jacob Beck argued that we are not justified in assuming that animals have concepts because we have no theory of how they systematically use those concepts. Beck followed Evans (1982) in arguing that if we are to attribute conceptual abilities to a creature, the creature must be capable of using its concepts systematically. Beck was responding to Susan Carey’s[2] work, which attributed concepts of number to some animals. By analysing the experimental evidence of pigeons’ behavioural responses he demonstrated that the pigeons were incapable of using these concepts in a way that satisfied the systematicity and generality constraints. So he concluded that we are unjustified in assuming that pigeons are thinking using concepts. I think that this constraint is a good one to help refine research in the area (Andrews claims that Carruthers (2009) demonstrated that animals do satisfy a weak version of the generality constraint), and those of us who claim that non-linguistic animals have concepts need to meet this criterion. So this leaves a lot of empirical legroom in the debate, and I think Jim has a point when he argues that the ability to use a concept for various purposes (which requires some kind of systematicity) is the key area to be researched.

[1] Andrews got the quote from Sober (2012), pp. 3-4.

[2] Here I am following Kristin Andrews’s explication of Beck’s work in Andrews (2015).

Constructed Worlds, Forms of Life and Intellectual Disabilities

In his ‘Philosophical Investigations’[1] Ludwig Wittgenstein discussed simple language games as a way of exploring the nature of our own language. One of the themes discussed early in the PI is ostensive definition and how it could be used to teach that a word refers to a number, a shape, a colour, or a whole object (e.g. ‘Slab’), or that a word is being used as a demonstrative (‘this’ and ‘that’). Wittgenstein’s discussion reveals a kind of poverty of stimulus in learning a word through ostension. So if a person points to something and says ‘Blickiri’, it is unclear purely by ostension whether the person is referring to a colour, a shape, an object, a number etc. Wittgenstein notes that we could limit the possible interpretations of ‘Blickiri’ if it was prefixed by ‘is a colour’, ‘is a number’ etc. However, if the person we are trying to teach by ostensive definition already understands predicates of our language like ‘is a colour’ and ‘is a number’, they would already understand complex aspects of our language and hence are not being taught from scratch by ostension.

On the picture sketched by the later Wittgenstein, we learn our language through our shared form of social life with our peers. Throughout the PI Wittgenstein analyses how words get their meaning, and shows that supposed bearers of linguistic meaning, such as private mental images, are not sufficient to the task of providing linguistic meaning. His emphasis on the social determination of meaning, shared practices, and actual linguistic use broke philosophy away from its traditional a priori methodologies. Instead of arguing that concepts MUST have a certain structure, or that we MUST be following certain determinate rules when reasoning, Wittgenstein asked us to look at our practices. This approach, if adopted, makes philosophy more pragmatic and interested in our actual interactions with the world, as opposed to a purely speculative or armchair discipline.

Martin Heidegger’s dense prose and profound-sounding claims about the nature of ‘Being’, ‘Language’, ‘Concealment’, ‘Technology’ etc at first glance seem far removed from the pragmatic spirit of the later Wittgenstein’s philosophy.[2] Furthermore, Heidegger’s later philosophy is no more compatible with pragmatism than is the philosophy the early Wittgenstein espoused in his ‘Tractatus’. However, it has often been noted that ‘Being and Time’, with its focus on everyday life, our concernful engagement with the world, the role of implicit awareness, and embodied engagement with the world, is also open to a pragmatic interpretation.

While Heidegger argued against a psychological interpretation of his work, it has nonetheless been used a great deal in both psychology and psychoanalysis. Robert Stolorow and George Atwood are excellent scholars of Heidegger and psychoanalysts who have managed to combine Heidegger with Freud. Their work has greatly enriched some Freudian concepts and has helped in understanding the lived world of people suffering from psychosis. The clinical psychologist Louis Sass has also managed to combine the insights of the later Wittgenstein and Heidegger’s ‘Being and Time’ as a way of understanding the lived experience of people with schizophrenia.

In this blog-post I want to discuss the social world that is partially constructed for people with severe intellectual disabilities. My aim is not to provide a scholarly discussion of the philosophy of either Wittgenstein or Heidegger; rather, I will use some of their concepts to help me elucidate how the world of people with severe intellectual disabilities is partially socially constructed. In an earlier blog-post, ‘Radical Interpretation and Intellectual Disability: The Principle of Charity and The Typical Mind Fallacy’, I tried to explore some of the challenges of interpreting how people with various forms of intellectual disability experience the world. In that work I called for more descriptive work, similar to the work Oliver Sacks has done on people with neurological disorders, to help health care workers understand the subjective world of the people they care for. The emphasis in that blog-post was on the subjective world of people with intellectual disabilities and the challenges of interpreting it. In a later blog-post, ‘Some Behavioural Techniques and The Idea of a Blank Slate’, I briefly discussed some behavioural techniques that are useful for dealing with people with intellectual disabilities who engage in challenging behaviour. There the focus was on objective features of the environment and behavioural techniques that reduce the probability of a particular behaviour occurring. In this blog-post I aim to discuss an intermediate state: the lived world of people with intellectual disabilities. Here my focus will be on intersubjective communication and the social world that people with intellectual disabilities sometimes live in.

In the 1960s, when a child was discovered to have an intellectual disability, a lot of the time the child would be immediately abandoned to state care. Things nowadays are much different; people in general have a greater grasp of what having a child with intellectual disabilities actually entails. The stigma once associated with having a child with intellectual disabilities has now been greatly eroded. Furthermore, there is much greater social support for the parents of a child with intellectual disabilities than was previously available. This isn’t to imply that things are perfect; this is obviously far from the case, and many challenges still exist. My point is merely that things have to a small degree improved.

When discussing the social world of a child with severe intellectual disabilities, my focus will be on their social world in the institution they live in. As I mentioned earlier, children are not automatically placed in care when they have severe intellectual disabilities[3]. There is a great emphasis on the importance of the child staying with his family and being included in the wider community. Nonetheless, caring for a child with severe intellectual disabilities, while perhaps also caring for other children, is an extremely difficult job emotionally. To help parents provide the best care possible, respite services are offered to them, where their child with severe intellectual disabilities can be cared for by a community house which specialises in respite care.

Depending on the level of care required by the child and the family circumstances, respite care can begin at any age, but it is typically provided for children as they get older and harder to manage. On average children will enter respite care from the age of 5 or 6. Like all children, a child with intellectual disabilities is thrown into a pre-prepared world upon birth. A child is born into a family drama[4]; the parents give the child his name and typically have a room ready for the child, along with fantasies about who the child will become. The child upon birth is immediately situated in a narrative created by the parents about the nature of the family and its relation to the external world. Wittgenstein correctly notes that to come to speak a language is to be immersed in a particular form of life. Different communities, societies and families have different forms of life. The newborn child who is thrown into the world has to try and learn the various criss-crossing language games of the family or society they are born into. Parents of a child with intellectual disabilities often talk about a process of mourning that they undergo when they realise that the child they had imagined will never exist. They need to undergo this process of mourning their idealised child in order to get to know their actual child with his various impairments. Parents claim that eventually they just see their child as a person and, after a while, don’t see the intellectual disability. However, there is a period where the child is not living up to the unconscious fantasy that the parents had created of him. It is difficult to speak in general about the experiences of children with intellectual disabilities, as different disorders such as Williams Syndrome, Down Syndrome, etc. will result in different difficulties in processing emotions, communicating and so on. Furthermore, even people with particular syndromes will differ in their abilities and temperaments.
So there is no one-size-fits-all criterion we can provide which will describe the challenges they will face growing up within their family; these matters will have to be evaluated on a case-by-case basis. But while it is essential to take note of the child’s intellectual disability and particular temperament, it is just as important to note the form of life that one is, to some degree, imposing on the child, and whether that form of life is suitable for the particular competencies of the child in question.

When the child is brought into respite care for the first time, he will again be entering a prepared world, where the staff working there will have a working theory of who the child is. Some of the staff (typically the nurse in charge) will have met the family to discuss the child and will have been introduced to the child. The family will typically try to prepare the child for respite by telling him where he is going, showing him pictures of the respite unit, and bringing the child to visit the unit before his respite begins. The staff will prepare by reading files from psychologists, social workers and GPs, reports from the child’s school, and reports from the child’s parents/guardians.

Human nature being what it is, staff will have formed a picture of the child before he arrives. Evidence from the files and reports will help the staff form a picture of the child who is coming into respite. Staff, upon hearing the nature of the disability and facts about the child’s behaviour, may view the child in comparison to various other people with intellectual disabilities they have looked after. Preparations for the child’s arrival will take place in various ways. One key issue will be which children the child is suitable to be placed with. Various social factors will have to be considered: if the child is prone to imitative behaviour, placing him with other children who can be violent or engage in self-injurious behaviour is not a good idea. Likewise, if the child finds noise intolerable, then placing him with children who like to make loud vocalisations may not be a good idea.

When the child arrives into respite care for the first time, just as when he is brought home from the hospital or on his first day in school, he is arriving at a drama prepared for him. The child is being cared for by nurses and care staff with fixed roles; the staff will have an idea of who the child is and how he will behave, and will have prepared activities and the environment accordingly.

Occupational therapists have long recognised that a huge part of what makes us who we are is what we do. Our daily activities are a large part of who we are. We are not disembodied Cartesian egos; on the contrary, we are embodied creatures immersed in and engaged with our world. Because of Heidegger’s focus on ‘Being-in-the-World’ and his detailed phenomenological analysis of our behaviour as we engage with our lived daily experience, a lot of occupational therapists and nurses incorporate aspects of Heidegger’s phenomenology in their research[5]. The importance of understanding everyday activities, and how people find them meaningful in their daily lives, cannot be stressed too much when considering the child entering respite care.

As we have already noted, the child enters his new world with a place prepared for him by staff. Some children are verbal and some are non-verbal; either way, it is important for staff and the new child to establish functional communication. If the child is non-verbal he will indicate what he wants by his behaviour: he will try to take what he wants, will get upset, and may engage in self-injurious behaviour if he cannot obtain what he wants. Even children with severe intellectual disabilities can be taught to use at least some degree of symbolic communication. PECS (the Picture Exchange Communication System), which involves the child being given a book full of pictures attached to a page with Velcro, is a useful system in this sense. Children who may not be able to use PECS to form actual sentences can still learn to use the pictures to communicate what they want. Lamh, a simple form of sign language which lacks the complex syntax of ordinary sign language, is also an effective way of communicating with others. However, if children cannot master these devices to any degree, staff will need to interpret the behaviour of the child: e.g. crying is a form of communication that something is wrong. And the staff member has to use whatever background knowledge he has of the child to discover what the child is communicating.

The phenomenological approach I have sketched involves treating the child with intellectual disabilities as an intentional agent who is engaged in his world and is trying to find meaning in a social world partly created by those who care for him. This approach recommends that those whose responsibility it is to look after these children ensure that the social environment is structured in such a manner that it is as easy as possible for the child to immerse himself in the shared form of life of his peers.

There have been some regrettable disputes about the importance of phenomenological research in occupational therapy and nursing. Some proponents of its use have claimed that the phenomenological approach is an alternative to scientific third-person accounts. This has led to a backlash where some theorists[6] have attacked phenomenological approaches to exploring the lived world of people who are ill or have a severe intellectual disability.

The approach of treating phenomenology as an alternative to scientific approaches is radically wrong-headed. When we are trying to understand the lived world of a person with intellectual disabilities, we are adopting personal-level explanations of their behaviour. This is how we interact with people in ordinary lived experience. When I arrange to meet a friend for a coffee, I will use this personal-level mode of explanation. Thus I will use propositional-attitude-style reasoning. I will assume that because my friend knows I will be in a particular coffee shop at a particular time, wants to meet me, has agreed to meet me at said time, and knows how to get to the shop, he will therefore meet me unless something happens to prevent him. We use this personal-level type of explanation all of the time in our interactions with others, and it works quite well[7]. When dealing with children who have an intellectual disability we are dealing with persons, some of whom are perfectly healthy aside from whatever developmental delay they have. Personal-level interpretations are both appropriate and useful. To adopt a phenomenological approach is to interpret the behaviour of the child in terms of personal-level ascriptions, though careful phenomenological analysis will sometimes reveal a more detailed picture than our ordinary folk intuitions do. This is not an alternative to scientific analysis of behaviour. It is just an approach which is useful and humane.

If we are trying to teach a child better functional communication, a lot of the time behavioural techniques which operate at the sub-personal level are superior to personal-level techniques. Likewise, if we are dealing with severe challenging behaviour, a combination of medical treatment and behavioural analysis usually works much better than any treatment at the personal level. But all of this is true of people who do not suffer from intellectual disabilities. A person with no intellectual disabilities may suffer a bout of depression, and treatment in terms of medicine may work better than personal-level psychotherapy. Here we are using a sub-personal-level explanation to help the person. The same will be true if a person with or without intellectual disabilities gets sick with MS or any other disorder.

Personal-level ways of interpreting people with intellectual disabilities are not an alternative to scientific explanations; they are just standard ways of understanding how a person finds meaning in the world and in the particular socio-linguistic community they find themselves in. As always, we can shift our levels of explanation as pragmatically needed, and leave the question of whether propositional-level explanations will eventually be eliminated for future science to decide.

[1] Henceforth ‘Philosophical Investigations’ will be referred to as the PI.

[2] Richard Rorty discusses the relation between the early Heidegger and the later Wittgenstein in his ‘Wittgenstein, Heidegger, and the Reification of Language’ (1991).

[3] Here my comments on care are restricted to practices in present day Ireland. Different countries have different procedures and practices and it would be a mistake to generalise my comments too widely without doing a lot of comparative research.

[4] Lacan, with whom I disagree a lot, is actually quite good at emphasising the nature of the family drama a child is thrown into at birth.

[5] See for example Wilding and Whiteford, ‘Phenomenological Research: An Exploration of Conceptual, Theoretical, and Practical Issues’ (2003); Clarke, ‘An Introduction to Phenomenological Analysis: A Useful Approach for Occupational Therapy Research’ (2005); and Crotty, ‘Phenomenology and Nursing Research’ (1996).

[6] For criticisms of the phenomenological approach in nursing see John Paley ‘Phenomenology as Rhetoric’.

[7] In his 1981 paper ‘Eliminative Materialism and Propositional Attitudes’ Paul Churchland argued that we may end up eliminating propositional-attitude-level explanations of behaviour as we learn more about neuroscience. He may be correct, but at present we are nowhere near that point, and there is little reason to deny ourselves the tool of personal-level ascriptions.

What is a Concept, and what creatures have them?

When considering the question of what a concept is, one is faced with a choice. A concept is to some degree a theoretical term that we use to explain, predict and control the behaviour of various different creatures. Now, as a theoretical term, a scientist can pretty much define the term as they want. An explicit definition is to be justified by its explanatory and predictive utility within a scientific theory. Now some philosophers argue against this approach because it leaves obscure the relation between the ordinary-language concept of ‘concept’ and any theoretical definition of the concept of ‘concept’. A scientist could reply that it is not his problem to explain the relation between his theoretical terms and the terms of ordinary language. His job is just to construct explanatory theories which help predict and control the phenomena under discussion; if a philosopher wants to worry about the relation between theoretical terms and ordinary-language terms, that is their prerogative, but it is not the job of the scientist to worry about such issues. In fact Dan Dennett has argued that the job of the contemporary philosopher is to help people understand the relation between the scientific image and the manifest image.

In this short blog post I won’t offer an explicit definition of what a concept is. I will instead discuss areas where people typically attribute concepts to others, and move down the scale towards more ambiguous cases where it is unclear whether attributing concepts is justified. My central message will be that attributing concepts is not something that is done according to a clear-cut criterion. There is no magical metaphysical cut-off point where creatures suddenly become concept users. There are rather areas where we have more or less justification for attributing conceptual abilities to others.

One area where there is no disagreement about whether we are justified in attributing concepts to others is that of ordinary mature language-using humans. If a person has a language then everyone agrees that they also have concepts. The reason we feel justified in attributing concepts to people who have a language is that, by tracking their linguistic behaviour, we can determine their conceptual abilities. Thus if a person is speaking about Dogs we can determine what their concept of a Dog is by questioning them. The person can answer a variety of different questions about their beliefs on the nature of dogs, and from this we can rationally reconstruct their implicit concept of a Dog. The fact that a subject can provide justification for their beliefs about Dogs gives us clear evidence about the nature of their concept of a Dog. This ability to justify and respond to questions by giving reasons is a more demanding criterion than merely being able to discriminate between objects in one’s environment. As people like Bob Brandom have noted, if we used the ability to reliably discriminate between objects in the environment as our criterion for attributing concepts to others, then we would be forced to attribute concepts to rusting metal, thermostats, flowers etc. So to avoid this overly broad criterion, where we are forced to attribute concepts to any entity no matter how dumb, some philosophers have argued that we should limit the attribution of concepts to creatures that have a language.

This move of limiting conceptual abilities to language users (and hence to human beings) is typically justified by an appeal to the philosophical views of Ludwig Wittgenstein. This appeal to Wittgenstein’s philosophy as providing a clear dividing line between the creatures that have concepts and those that do not is ironic. It is ironic because it portrays Wittgenstein as a kind of Platonist who provides necessary and sufficient conditions for something to count as falling under a particular concept, and who demonstrates that only language users meet the criterion necessary for being counted as concept users. This quasi-Platonist approach is the opposite of the approach Wittgenstein actually adopted, so it is very ironic that his name is so often attached to this Platonist world view.

One of the many novel aspects of Wittgenstein’s ‘Philosophical Investigations’ is that he attacked the idea that concepts are the type of thing which have essential features:

Consider, for example, the activities that we call “games”. I mean board-games, card-games, ball-games, athletic games, and so on. What is common to them all?- Don’t say: “They must have something in common, or they would not be called ‘games’” – but look and see whether there is anything common to all. – For if you look at them, you won’t see something that is common to all, but similarities, affinities, and a whole series of them at that. To repeat: don’t think, but look!-Look, for example, at board-games, with their various affinities. Now pass to card-games; here you will find many correspondences with the first group, but many common features drop out, and others appear. Now we pass next to ball games, much that is common is retained, but much is lost-Are they all entertaining? Compare chess with noughts and crosses. Or is there always winning and losing, or competition between players? Think of patience. In ball-games, there is winning and losing; but when a child throws his ball at the wall and catches it again, this feature has disappeared. Look at the parts played by skill and luck, and at the difference between skill in chess and skill in tennis. Think now of singing and dancing games; here we have an element of entertainment, but how many other characteristic features have disappeared! And we can go through the many, many other groups of games in the same way, and can see how similarities crop up and disappear.

And the upshot of these considerations is: we see a complicated network of similarities overlapping and criss-crossing:  similarities in the large and in the small.” (Wittgenstein: Philosophical Investigations 67)

Wittgenstein’s conception of concepts in terms of family resemblances has found some empirical support in the work of Eleanor Rosch (prototype theory) (1970) and George Lakoff (1987). If we adopt this anti-essentialist approach we are left with a view where the concept of ‘concept’ is itself a criss-crossing network of family resemblances, with no clearly defined essential features located in Platonic heaven.

When we adopt this approach, we are stepping into a world similar to that of biology, where we stopped asking silly essentialist questions like “when exactly does one species change into another?” a long time ago. Likewise, the question of the exact moment at which concepts appear on the scene makes little sense. This isn’t a matter of adopting a simple “yes”/“no” approach but a matter of degrees of usefulness in adopting intentional-level descriptions.[1]

To see the advantage of adopting this approach, let us consider the approach we take to people with schizophrenia and various forms of aphasia. To Platonic essentialists, having a concept is a clear-cut thing. If you have no language you have no concepts. The cut-off point is language and there is little more to be said on the issue. Unfortunately for these Platonists, reality doesn’t fit into this neat little binary opposition. The world of biology is a messy one, and these Platonist philosophers need to wake up to the fact that reality doesn’t always conform to their simple a priori intuitions.

Let us consider the case of a language-using adult who, as a result of a stroke, has a form of aphasia that leaves him no longer able to understand or speak a language. Now imagine[2] that this adult still has the ability to find his way around his environment, recognises his family members, and responds to them in a similar though diminished manner. For the Platonist, once this person loses his linguistic abilities as a result of a stroke we are forced to say that he no longer has any concepts. This approach seems odd. Say (as is quite common) such a person still likes the same food and the same music, and recognises his wife, his children, the family pet etc. This person, according to the Platonic essentialist, has no concept of his wife, or of his children etc. At best we can say that the person is a reliable discriminator of objects in his environment, in the same way a thermostat is.

The above approach is not logically contradictory. Nonetheless there seems little point in adopting it. The fact that the person makes similar categorical discriminations post-stroke as he did pre-stroke, and that we can predict his behaviour by adopting the intentional stance and attributing conceptual abilities to him, seems reason enough to do so. And there is little reason to deny concepts to people with aphasia in the way we might with people with severe Alzheimer’s, who near the end do not seem to have any concepts whatsoever. In short, there is little pragmatic reason for adopting this approach to a person with aphasia, and even less theoretical gain in doing so. None of this is meant as a proof; it is rather meant as a reminder of the way we typically apply these concepts.

For a further exercise in imagination-stretching, consider the case of a person with schizophrenia who is speaking in word salad. There is a long tradition of psychiatrists and psychoanalysts trying to interpret the language used by a person engaging in word salad as a coherent story with meaningful concepts. In a famous case, Daniel Paul Schreber wrote a book, ‘Memoirs of My Nervous Illness’, while undergoing a psychotic break. On a straight reading the book is senseless word salad intermingled with bits of sense here and there. Nonetheless, people like Freud, Lacan, and Matte-Blanco (using the apparatus of psychoanalysis) have managed to give the book a coherent intentional-level interpretation. In more recent times the clinical psychologist Louis Sass has offered an intentional-level interpretation of Schreber’s book using a different theoretical apparatus (a phenomenological one).

It should be noted that the intentional-level explanations of Schreber’s work remain at the level of description. And as far as I am aware, we have no way of deciding which (if any) of the interpretations of Schreber’s work is the correct one. Furthermore, we have no evidence that therapy using either the psychoanalytic techniques or the phenomenological approaches above is, on its own, in any way successful in treating schizophrenia. Typically the best approach to treating schizophrenia is a combination of medical treatment and some kind of talking therapy.

The reason I bring up schizophrenia is that it raises interesting issues for Platonic essentialists about concepts. Does a person who is undergoing a psychotic break have concepts at the time of the break? Well, if the person has a language, then according to the Platonic essentialist he has concepts. But what counts as having a language for the Platonist? Here I think their strict criterion falls by the wayside. Language is not a single thing. It consists of multiple interacting components: syntax, semantics, morphology, phonology, pragmatics etc. There are mountains of empirical studies of the deviance in syntax, semantics and pragmatics in the word-salad-type utterances of people with schizophrenia. One wonders which essential components must fail before the Platonist will decide that the person with schizophrenia is no longer a competent language user, and hence that we are not at that time justified in attributing concepts to him.

On the loose naturalistic approach that I am adopting (basically Dennett’s stance-stance approach) these strict yes/no questions can be dropped. We know that talking therapy[3] can be quite effective when combined with drug treatment. This fact alone justifies us in adopting intentional-level interpretations of people undergoing a psychotic break and treating them as concept users. Appeals to arcane philosophical intuitions about some monolithic entity called language, and its importance, do not decide the issue. Rather, our ability to predict, and hence manage, the illness is what matters.

Similar ambiguities arise for conditions like Williams syndrome and Specific Language Impairment. The fact is that the world is a messy place, and the narrow binary intuitions that Platonists appeal to are not very helpful in dealing with such issues. The loose pragmatic approach is the best way of dealing with the complexity of the empirical world, and it avoids the trivial simplicities of the narrow Platonism I have described.

A critic could argue that the loose pragmatic approach I am adopting here will still result in a kind of free-for-all where we end up attributing concepts to thermostats, rusting metal etc. I am not sure that this fear is justified. I am not arguing for an anything-goes approach to intentional ascriptions. In his (1991) paper ‘Real Patterns’ Dennett used the work of the mathematician Gregory Chaitin to specify precisely when we have discovered a real pattern. Dennett glosses it as follows:

“Of course, there are bound to be other ways of describing the evident patterns in these frames, and some will be more efficient than others, in the precise sense of being systematically specifiable in fewer bits. Any such description, if an improvement over the bit map, is the description of a real pattern in the data.” (Dennett: Real Patterns p. 33)

Using Chaitin’s approach we can distinguish between purely pragmatic intentional ascriptions to things like thermostats and cases where we are actually picking out real patterns in the environment. So even if we theorise that non-linguistic creatures have concepts, we avoid the over-permissiveness in concept attribution that people like Brandom warn about.
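The compression idea behind Dennett’s gloss can be illustrated with a toy computation. The sketch below is not from Dennett or Chaitin; it is just an illustration, using Python’s zlib compressor as a crude, computable stand-in for Chaitin’s algorithmic information measure. The criterion it illustrates is the one in the quote above: a “frame” contains a real pattern when it admits a description systematically specifiable in fewer bits than its bit map.

```python
import random
import zlib

def description_size(data: bytes) -> int:
    """Size in bytes of a zlib-compressed description of the data.
    (A crude, computable stand-in for Chaitin's incompressibility measure.)"""
    return len(zlib.compress(data))

# A "frame" with a real pattern: a simple repeating structure.
patterned = b"10" * 500  # 1000 bytes, highly regular

# A "frame" of pseudo-random noise: nothing for the compressor to exploit.
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(1000))

# The patterned frame admits a description far shorter than its bit map,
# which is Dennett's criterion for a real pattern; the noise does not.
print(description_size(patterned), len(patterned))
print(description_size(noisy), len(noisy))
```

On a typical run the patterned frame compresses to a few dozen bytes while the noise compresses to roughly its original size. In these terms, intentional ascriptions to a thermostat would track no pattern compressible beyond the bit map, whereas ascriptions that genuinely predict behaviour would.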

Finally, I have been quite critical of people who argue that only creatures with a language have concepts. I have criticised their approach as being too essentialist. However, I think their scepticism about attributing concepts to non-linguistic animals is very important. This scepticism helps by calling into doubt the assumptions of those who are certain that some non-human animals have concepts. Such criticism can only be a good thing, and it helps avoid complacent theorising unconstrained by the facts.

[1] To see the Intentional Stance approach in more detail see Dennett ‘The Intentional Stance’ (1987) and Dennett ‘Real Patterns’ (1991).

[2] This is an imaginary case, but cases of people with aphasia that obliterates their language production and comprehension without significantly affecting their general practical ability to negotiate their way around their social and physical environment are quite common.

[3] Both Cognitive Behavioural Therapy and Psychodynamic Approaches show similar levels of success when combined with drug based interventions.

Dennett and the typical mind fallacy




Gradually it has become clear to me what every great philosophy so far has been: namely, the personal confession of its author and a kind of involuntary and unconscious memoir.  (Nietzsche: Beyond Good and Evil Sec 6)


In this paper we will argue that Dennett’s views on the nature of the mind are based on his unconsciously projecting his own idiosyncratic type of mind onto all other people. We will argue that he is guilty of what William James called the “Typical Mind Fallacy”: assuming that one’s own type of mind is representative of all types of mind. We will show that Dennett has a particular type of consciousness, and that it is not representative of the consciousness of all other people. We will argue that Dennett has a “Bodily-Linguistic” type of consciousness. Prior to doing this, however, we will sketch the typology which we are using to interpret Dennett’s mind and the evidence that supports it.

We are working with the typology which David Berman invented in his 2008 book Penult. This typology divides people into three main divisions: (1) Type 1: Mentalistic Consciousness, (2) Type 2: Bodily Consciousness, (3) Type 3: Socio-Linguistic Consciousness. The evidence for this typology comes from the variability people have in their mental abilities. Some clear examples of mental variability which have been confirmed are:

(1) Galton: Provides evidence that people have different abilities to form mental imagery. See also Kosslyn, James etc.

(2) Dalton’s discovery that some people are colour blind.

(3) Some people have synesthesia; this fact was denied for years but is accepted now.

(4) There are variations in how people experience pain. See, for example, Roger Fillingim (2005) “Sex and Gender Issues in Pain”, as well as Young et al. (2008) “Genetic Differences in Pain Variability”.

The above four examples show variation in people’s cognitive abilities and subjective experiences which went undetected for years before being discovered by empirical research. We suggest that such variation in mental abilities inadvertently influences the theories of the mind which philosophers construct. So, for example, there is evidence that people’s psychological abilities affect the philosophical theories they accept; see Berman “Philosophical Counseling for Philosophers” (2013), William James “Principles of Psychology” (1892), and Holtzman “Do Personality Effects Mean That Philosophy is Intrinsically Subjective?” (2013).

Showing that there is variation in people’s mental abilities, and that these variations influence the type of theories philosophers accept, is one thing; it is not, however, evidence for Berman’s Tripartite Typology. In his Penult, Berman uses evidence from the history of philosophy, interviews with living subjects, and his own introspective experiences to justify his Typology. His Typology makes sense of the views of the great philosophers throughout history. It also shows why so many of the great philosophers of the past, despite being great thinkers, could not agree with each other about the nature of the mind: all of them were operating on the mistaken assumption that there is only one type of mind. The diagram below shows how Berman divides the philosophers of the past according to his Tripartite Typology.


[Diagram: Berman’s Typology, with Type 1 shown above Types 2 and 3, and Dennett placed between Types 2 and 3 as “major 2 / minor 3”.]


Type 1 philosophers have a strong experience of their own consciousness; this experience is called unbounded consciousness. Type 2 philosophers have no experience of unbounded consciousness; their experience is entirely bodily and linguistic. Type 3 is a socio-linguistic type of consciousness. The various types of consciousness can be blended, with one type being dominant. So, for example, Berman argues that both Rorty and Dennett are blends of Types 2 and 3: for Dennett Type 2 is dominant, while for Rorty Type 3 is dominant (Penult p. ).

Berman’s three main psychological types are vividly represented by Rene Descartes (Type 1), William James (Type 2), and Richard Rorty (Type 3). The following quote from Descartes illustrates his Mentalistic abilities:


When I consider the mind—i.e. consider myself purely as a thinking thing—I can’t detect any parts within myself; I understand myself to be something single and complete. The whole mind seems to be united to the whole body, but not by a uniting of parts to parts, because if a foot or arm or any other part of the body is cut off, nothing is thereby taken away from the mind. As for the faculties of willing, understanding, of sensory perception and so on, these are not parts of the mind, since it is one and the same mind that wills, understands and perceives. They are (I repeat) not parts of the mind, because they are properties or powers of it. (Descartes: Meditations on First Philosophy p. 11)


The above quote illustrates that, as a Type 1 philosopher, Descartes experiences himself as primarily a mental entity. James’ experience of himself is entirely different:[1]

Let the case be what it may be for others, I am as confident as I am of anything that, in myself, the stream of thinking (which I emphatically recognize as a phenomenon) is only a careless name for what, when scrutinized reveals itself to consist of the stream of my breathing. (James 1904)

There is I mean no aboriginal stuff or quality of being, contrasted with that of which material objects are made, out of which our thoughts of them are made, but there is a function in experience which thoughts perform…(namely)…knowing. Consciousness is supposed necessary to explain the fact that things not only are, but get reported, are known. (James: 1904 p.101)

Everyone assumes that we have direct introspective acquaintance with our thinking activity as such, with our consciousness as something inward contrasted with the outer objects which it knows. Yet I must confess that for my part I cannot feel sure of that conclusion. Whenever I try to become sensible of my thinking activity as such, what I catch is some bodily fact an impression coming from my brow, or head, or throat or nose. It seems as if consciousness as an inner activity were rather a postulate than a sensibly given fact. (James: 1892: Text Book of Psychology p. 467)

James is here denying that we introspectively experience consciousness; his subjective experience is entirely different from Descartes’. While Descartes and James are good exemplars of Types 1 and 2, Richard Rorty is a good exemplar of Type 3. Here are some Rorty quotes that illustrate his type of mind:

The temptation to look for criteria is a species of the more general temptation to think of the world, or the human self, as possessing an intrinsic nature, an essence. That is, it is the result of the temptation to privilege some one among the many actual and possible languages in which we habitually describe the world or ourselves. (Richard Rorty: Contingency, Irony and Solidarity p. 15)

This is that there is nothing deep inside each of us, no common human nature, no built in solidarity, to use as a moral reference point… To be a person is to speak a particular language, one which enables us to discuss particular beliefs and desires with particular sorts of people. It is a historical contingency whether we are socialized by Neanderthals, Ancient Chinese, Eton, Summerhill, or the Ministry of Truth. (ibid p 177)

We can see from the above quotes that Rorty is incapable of conceiving of himself independent of language. This is a clear indication that he is a Type 3 thinker.

In his Penult Berman goes into much more detail in defending his typology. Here we have shown what the evidence for the typology is and how it explains the philosophical views of various great philosophers. Throughout this paper we will analyse Dennett’s writings in detail to show that he is a combination of a Type 2 and a Type 3 thinker, and that he incorrectly generalises his type of mind as representative of all minds. We will analyse Dennett’s views on Mental Imagery, Dreams and Pains and show that his views on these topics are derived from his own subjective experience of these phenomena.


We will make the case that Dennett’s philosophy of mind is partly derived from his own idiosyncratic psychology. This claim portrays Dennett as having experiences which are radically different from those of other philosophers, which leaves the question of how Dennett developed the type of mind that he in fact has. Was his type of mind derived from his genetic code, or did socialisation play the more important role? Any answer to this question will be highly speculative, as little has been written about Dennett’s upbringing and we obviously have no information about the structure of his genetic code. However, trying to understand Dennett’s life, and the extent to which his experiences influenced the type of mind he developed, will help make sense of his strange views on the nature of the mind. Dennett wrote a short autobiography detailing his life experiences, and discussing it will help us understand his type of mind better.

Dennett’s brief discussion of his early life and his early education in philosophy is interesting but fairly sparse in details. Certain things do stand out though. Dennett’s early loss of his father was obviously a traumatic experience for him. According to Dennett, he was expected to follow his father into academic life in the humanities. So while Dennett’s father was no longer alive, he was present to Dennett as an idealised figure, one whom people expected Dennett to live up to and to follow into the humanities. Dennett was interested in engineering and loved building things, yet was expected to go into the humanities. This situation would create resentment in most teenagers: his own dreams and aptitudes were subordinated to the expectations and wishes of his family. He managed a minor rebellion against his family’s wishes by not going to Harvard; he went instead to Wesleyan University, where he studied Maths and English. It was in his advanced Maths class that he first discovered Quine’s work, when he read his book ‘Mathematical Logic’. When Dennett first read Quine’s ‘From A Logical Point of View’ he claimed he was impressed but found something wrong with Quine’s position. So Dennett contacted Quine and transferred to Harvard. It is worth noting that after less than a year Dennett’s rebellion against his parents’ wishes had ended. He was studying where they wanted him to study, and he was studying philosophy, a humanities subject. So Dennett’s discovery of Quine was doubly beneficial: he had found a thinker who really interested him, and he had found a way of pleasing his family (in particular his dead, idealised father). By studying Quine he would be studying a humanities-type subject (keeping his family happy), while Quine’s interest in natural science and formal logic chimed well with Dennett’s scientific and engineering interests. So it is understandable that Dennett was excited by the discovery of Quine, as it helped him both to fulfil his own desires and to live up to his father’s expectations.

Dennett eventually wrote his undergraduate thesis on Quine, calling it “Quine and Ordinary Language”. No details of the thesis are available, though Dennett mentions that it was critical of Quine and that Quine thought highly of it. Dennett said that at this stage of his development he, unlike most of his fellow students, held Ryle’s work in high regard; he thought Ryle’s “The Concept of Mind” was one of the best works of philosophy he had read. Dennett finished college at the age of twenty, and at this early stage his primary influences were Ryle and Quine, both of whom were behaviourists. So it is tempting to argue that Dennett’s strange eliminativist views on the mind resulted from his being educated by behaviourists. But this cannot be correct: Nagel and Kripke were undergraduates alongside Dennett, and they certainly did not end up sharing Dennett’s view of consciousness. So we need something more than the influence of Dennett’s teachers to explain his views on consciousness. Something in Ryle and Quine chimed with Dennett. We argue that their theories seemed intuitively correct to him because they rang true at an experiential level: they rang true to his own lived experience. Those with different types of mind, for example Nagel and Kripke, found Ryle’s theory counter-intuitive because the way they experienced the world was so different from Ryle’s or Quine’s descriptions of the mind.

Because he thought so highly of Ryle, Dennett went to Oxford to study under him. Ryle supervised Dennett’s doctoral thesis, which eventually became his book “Content and Consciousness”. Dennett notes that he saw problems with qualia as early as 1963, and even then he believed that our conscious experience only seems to be as rich as we believe it is. So we can see that from a very young age he already felt that the philosophical community was deeply wrong about the nature of the mind. He would spend the next fifty years developing arguments and finding empirical evidence to support his intuitions about the nature of the mind. Pretty much from day one in Oxford he tried to inform his theories of the mind with the best neuroscientific evidence available. His PhD was so deeply immersed in neuroscience that a neuroscientist was asked to examine the thesis. Dennett had managed to follow in his father’s footsteps while still getting to study the science which so interested him.

The most important thing to note about his description of his early development is that from the age of seventeen, when he was first exposed to philosophy, he was well disposed to the philosophies of Quine and Ryle. This indicates that he saw something in their behaviouristic explanations of the mind which chimed with his own experience. He felt that ‘qualia’ was a useless theoretical term as early as the age of twenty-three. His entire philosophical life has been dedicated to providing philosophical arguments and scientific evidence to support his intuitions about the mind.

What appealed to Dennett about Ryle and Quine was their emphasis on language and behaviour. Subjective experience was not important to either Ryle or Quine, and both believed that language played a central part in making us human. Dennett, being primarily a linguistic-bodily thinker, would have related to Ryle and Quine perfectly.


Dennett’s theories of mind and consciousness have always emphasised the importance of language for thought and experience. In “Consciousness Explained” Dennett described the self as follows:

Selves are not independently existing soul-pearls, but artifacts of the social processes that create us, and, like other such artifacts, subject to sudden shifts in status. The only “momentum” that accrues to the trajectory of a self, or a club, is the stability imparted to it by the web of beliefs that constitute it, and when those beliefs lapse, it lapses, either permanently or temporarily. (Consciousness Explained: p. 423)

We suggest that Dennett’s description is of his own type of mind and not of the minds of all people. Language is such a deep part of his self that he is incapable of thinking independently of it. We will provide evidence for this claim by examining his views on Mental Imagery, Dreams, and Colour etc.

The importance of language for Dennett becomes even more apparent when one considers his theory of consciousness. Dennett’s theory of consciousness, which in 1996 he dubbed the “Fame in the Brain” model, is meant to replace the tempting idea of the “Cartesian Theatre”. Throughout his philosophical career Dennett has attempted to replace the Cartesian Theatre picture with increasingly more apt models, from his “Multiple Drafts” model to his “Fame in the Brain” model to his “Fantasy Echo” model. With the Fame in the Brain model he argues that consciousness is more like fame than like television. It is not a special “medium of representation” in the brain into which content-bearing events must be “transduced” in order to be conscious (Sweet Dreams: p. 160). It is more like fame, where one content-bearing event achieves something like fame in competition with other fame-seeking events. This metaphor is obviously imperfect. Fame is typically an intentional object of a number of agents (ibid, p. 161), and Dennett concedes that it makes little sense to think of multiple homunculi all holding some content as an intentional object and elevating it to fame. So he instead opts for the less troublesome metaphor of consciousness as “influence in the brain”.

For Dennett, conscious events are attention-grabbing events: thoughts that persist in the mind for a sustained period of time. To help us think about this he asks us to think of them in terms of fame, and of how long a person must remain prominent in order to count as famous. Dennett plausibly argues that a person cannot be famous for a few seconds; likewise, a person being viewed by millions of people on television will not necessarily be famous. No matter how many people may see me in the background of some sitcom, this will not make me famous. To be famous I would have to be a person whom others talked about enough to ensure that my presence reverberated within the stream of public discourse. Achieving this importance within the public stream of discourse takes time; it does not happen instantly. Some may argue that it does in fact occur instantly, pointing to people who seem to achieve instant fame as a result of some event; an assassin of a president, for example, may achieve a level of instant fame. Nonetheless, despite appearances, instant fame does not really occur. It is something which is conferred post hoc, as a result of the influence the event has on the public discourse of others. Without the reverberations within the linguistic community the shooter would not achieve fame. Dennett argues that consciousness has similar features to the type of fame that is achieved in the external world.

For Dennett, fame in the brain, like fame in the world, requires echo-making capacities. To say that something is conscious even though it has no echo-making properties within the mind is, for Dennett, senseless. He uses the example of the aroma of classroom library paste as something that has echo-making properties for him: the aroma has strong echoic properties because it evokes various vivid childhood memories and associations.

To be conscious is precisely to have a thought that yields enough influence to direct a lot of homunculi to engage with it; fame, he argues, is like this too. He also notes that the nature of fame has changed with the invention of electronic media:

“a recursive positive feedback became established, dwarfing the initial triggering event and forcing the world to wallow in wallowing in wallowing in reactions to reactions to reactions to the coverage in the media, and so forth.” (ibid, p. 167)

He thinks that such fame was very difficult to achieve in the pre-electronic age. He argues that language did for human consciousness what electronic media did for fame: it transformed it. It is for this reason that he calls us ‘Joycean machines’. The recursive capacity which is a central feature of natural language is one of the key features of consciousness. Dennett even makes the speculative prediction that non-human animals may have so few echoic capacities that, strictly speaking, they will not have consciousness. Dennett’s view on animal consciousness is definitely a minority position; the majority of scientists now accept that animals are conscious. Obviously, science is not a majority-rules activity, so the fact that Dennett’s position is not widely accepted does not prove that it is incorrect. The primary reason people argue that non-linguistic animals are conscious is their complex behavioural capacities: behavioural tests have shown that some animals have complex concepts of number, agency, causality, etc. However, Dennett points out that people often reason with these concepts unconsciously, and asks whether we can be sure that animals’ competence with the above concepts does not entirely involve unconscious reasoning. He also notes that we cannot be sure that animals’ behaviour isn’t a kind of blind-sight: such animals would be able to predict certain aspects of their environment without having any conscious experience of doing so. It is worth noting, though, that some mammals respond to anti-depressants and anti-anxiety medications in a similar way to humans, which indicates that their experiences are similar to ours. There is little to recommend the view that such animals suffer from total blind-sight in all of these areas, though it is admittedly theoretically possible.

Dennett’s speculation that non-linguistic animals may not be conscious is not central to his theory of consciousness. If his claim that non-linguistic animals are not conscious were refuted (and I think it will be), this would not refute his theory of consciousness as influence in the brain. Non-linguistic animals may have basic recursive capacities that make possible the type of conscious experience which Dennett talks about.

At this point I should clarify what Dennett means by recursive capacities. In an obvious sense Dennett cannot mean that recursion is entirely unique to humans. Dennett accepts the computational theory of the brain; he also accepts that computational procedures govern animal brains. It is a fair bet that the computational procedures which govern human and animal brains sometimes involve recursion, and Dennett does not deny this obvious fact about the structure of animal brains. When he says that human language has a unique recursive structure he is speaking of recursive structure as a kind of software. He calls this software a virtual von Neumann machine (Consciousness Explained, p. 201). Dennett argues that the mind is like a software programme installed on the parallel neural networks of the brain (Densmore and Dennett 1996, p. 1). The software is installed onto the hardware of the brain through memes. Our linguistic abilities give us the capacity to understand and produce a potentially infinite number of sentences. Through communicating with each other, humans can learn different ways of thinking about the world. Our culture, made possible by our linguistic capacities, means that every child born into a linguistic community will have the capacity to use the tools for thinking created by previous generations, such as mathematics, logic, the scientific method, etc. These tools, when learned, install programmes which can give brains the capacity to think in ways far superior to those available to animals unaffected by culture. This is analogous to the way in which some software (e.g. an app), when installed on hardware, gives the hardware capacities which it previously did not have.

We can see from the above description that Dennett’s conception of consciousness is strongly tied up with linguistic competence. This equating of consciousness with language is so strong that he goes as far as to deny that animals are conscious. Dennett’s equating of consciousness with language is at odds with a lot of neurological data, yet he stubbornly persists in claiming that language and consciousness are intimately connected. A possible explanation for his position is that he is unconsciously projecting his own mental abilities onto those of others. To analyse this contention we will examine his overall views on consciousness. In Part 1 we will examine his views on mental imagery and dreams. In Part 2 we will examine his views on the nature of pain. By analysing Dennett’s conception of mental imagery, dreams, and pain we will show that he is guilty of unconsciously committing the typical mind fallacy.



We claim that Dennett is a non-imager whose views on the nature of mental imaging are derived primarily from his own idiosyncratic psychological abilities. We argue for this conclusion by considering various psychological tests which indicate that people do indeed have differing abilities to form mental imagery, ranging from non-imagers to eidetic imagers. We will analyse Dennett’s various discussions of mental imagery and show through textual analysis where he indicates that he may be a non-imager. We will then compare Dennett’s reports of his imaging abilities with those of people with various different levels of imaging ability. Overall we will conclude that it is highly probable that Dennett is a non-imager.

Section 1: Dennett: Experience is not what you think it is

Dennett discusses pain in the context of a general evaluation of our supposed phenomenological world in Chapter 2 of Consciousness Explained. Dennett proposes a number of experiments designed to show us that our knowledge of our phenomenological world is less accurate than we assume. He proposes an experiment on mental images (p. 27), one on our sense of touch (p. 47), one on sound (p. 49), and one on vision (p. 54), as well as the experiment on pain (p. 60). Dennett argues as follows from his experiments:

Did anything you encountered in the tour of the phenom in the previous chapter surprise you? Were you surprised for instance that you could not identify the playing card until it was almost dead centre in front of you? Most people, I find, are surprised-even those who know about the limited acuity of peripheral vision. If it surprised you, then that must mean that had you held forth on the topic before the surprising demonstration, you would very likely have got it wrong. People often claim a direct acquaintance with more content in their peripheral visual field than they in fact have. Why do people make such claims? Not because they directly and incorrigibly observed themselves to enjoy such peripheral content, but because it seems to stand to reason…Am I saying that we have absolutely no privileged access to our conscious experience? No, but I am saying that we tend to think we are much more immune to error than we are.  (Consciousness Explained p.68).

The card experiment is easily testable because the card exists in the external world: when we ask people what card they are seeing, we can test their claims against a fact in the external world. Asking people about things like mental images is different; when people make claims such as ‘I am a vivid imager’ we cannot easily test whether they are guilty of exaggerating their experiences, as we can with the card test. Likewise with people’s experience of pain. Dennett is asking how we can know that people are not spinning stories based on theories they hold, as opposed to observing internal objects. However, let us assume that his comparison of the card trick with our internal experiences of mental images, and felt experiences of pains, is valid. What would this mean for our theory? Suppose that in the card experiment some of us discovered that our visual experiences were not as accurate as we believed. We did not thereby discover that we have no experiences; our experience of the card at the centre of our visual field was accurate, and it was only our peripheral vision which was affected. If our introspective experiences are analogous to our perceptions and the same errors occur, then this means that we have to be careful with our introspection, not that introspection is somehow inherently flawed.

Section 2: Mental Imagery

William James and Francis Galton famously warned us against committing the typical mind fallacy: wrongly assuming that all people have minds the same as ours. Galton’s breakfast-table questionnaire showed that people have differing abilities to form mental images, ranging from non-imagers to weak imagers to strong imagers to eidetic imagers. More recently, the paper by psychologists Reisberg, Pearson and Kosslyn, “Intuitions and Introspections about Imagery: The Role of Imagery Experience in Shaping an Investigator’s Theoretical Views”, demonstrated that researchers working on the imagery debate were influenced by their own imaging abilities in the views they held on the nature of imagery. Their study showed that, as scientists did more experiments and learned more about the topic, their imaging abilities played less of a role in their views because they were more influenced by the scientific evidence. Kosslyn, however, noted that a small percentage of those studied were not swayed by the scientific evidence:

In addition, the VVIQ scores were correlated with the responses for the views in 1980: the results indicated that researchers who reported less vivid imagery were more inclined toward the propositional view. Thus, in the early stages of the debate, researchers’ personal experiences were related to their theoretical stances. In contrast, when the VVIQ scores were correlated with their current views, the scores were found not to be correlated with their current attitude. This finding suggests that scientists really do pay attention to the data and let the data take precedence over their introspections or images. As results from increasing numbers of imagery experiments were reported and the case for depictive representations grew stronger, even many of those with poor depictive imagery became convinced that depictive representations exist and are used in imagery. Nevertheless, some of the extreme cases-who reported no imagery-persisted in denying the existence of mental imagery. (Kosslyn et al., The Case for Mental Imagery, p. 181)

Kosslyn claims that in some extreme cases people who have no imagery persist in denying the existence of mental imagery. In an online article Kosslyn argued that Zenon Pylyshyn was a non-imager whose views on the imagery debate were strongly derived from his lack of ability to form mental images. We aim to demonstrate in this paper that Dennett’s views on mental imagery can also be explained by his being a non-imager.

Dennett typically interprets people’s claims that they think using mental images as nothing more than a theorist’s fiction; in other words, a metaphor[2]. The reason he gives for this belief is partly based on his heterophenomenological method and his theoretical belief that, since people have no internal eyes, having mental images is impossible. We do not doubt that some of Dennett’s reasons for treating mental images as metaphors are theoretical; however, we argue that the primary reason Dennett treats mental images this way is that he is a non-imager. This is not meant as any kind of personal slight on Dennett; both of the authors of this paper are extremely poor imagers. In fact one of the authors, David Berman, has written a paper, “Philosophical Counselling for Philosophers” (2008), which recounts how his own philosophical views were inadvertently affected by the fact that he is a poor imager. It is vitally important that such facts are discovered if we are to ascertain the truth rather than merely assume that all thinkers’ minds are the same.

In his “Heterophenomenology Revisited” (2007) Dennett makes clear his view that mental images are mere metaphors which describe neurological facts we do not yet understand. We will quote his views in full as they are very instructive:

The standard presumption is that “I know because I see it” is an acceptably complete reply when we challenge a reporter, but where a subject is reporting on mental imagery, for instance, we create an artefact. We ask a subject to tell us how many windows there were in the front of the house he grew up in, and he closes his eyes for a moment and replies “four”. We ask: How do you know? “Because I just ‘looked’… and I ‘saw’ them!” But he didn’t literally look. His eyes were closed (or were staring unfocused into the middle distance)…When we confront this familiar vacuum, there is an almost irresistible temptation to postulate a surrogate world-a mental image-to stand in for the part of the real world that a reporter observes…The “recollected image” of the house has a certain richness and accuracy that can be checked, and its limits gauged. These limits give us important clues about how the information is actually embodied in the brain, however much it seems to be embodied in an “image” that may be consulted. This is where the experimental work by Shepard, Kosslyn, and Pylyshyn and many others comes into play. (Heterophenomenology Revisited, pp. 256-257)

One important point to note is that Dennett cites the experimental work of both Kosslyn and Pylyshyn as a way of finding out what is actually going on in the brain. This is interesting because of course Kosslyn is a defender of mental images, while Pylyshyn uses his experiments to cast doubt on the existence of mental images. This may give the impression that Dennett is open to the existence of mental images, that he is merely doubting them for the purposes of collecting heterophenomenological data, which he will later subject to experimental tests. However, his discussion of Shakey the robot shows that Dennett actually subscribes to the more radical view that mental images do not exist.

Shakey is a robot that Dennett introduced into Consciousness Explained for a thought experiment. Shakey is an actual robot, built in 1966, that is programmed to move boxes around a room. Below (fig. 1) is a picture of Shakey as he appeared in 1966 when he was first built:

Fig. 1: Shakey

Shakey cannot speak; however, Dennett speculates that if we could programme it with a language, we could programme it to report that it moved boxes by rotating mental images of the boxes in its mind’s eye. We as designers know that Shakey does not actually rotate mental images in his mind before moving the boxes. Dennett suggests that something similar occurs when humans report rotating mental images:

If a human subject says she is rotating a figure in her mind’s eye, this is a claim about what it is like to be her, and indeed she intends to describe, unmetaphorically and accurately, a part of the real world (a part that is somehow inside her) as it seems to her. But she is speaking at best metaphorically and at worst creating an unwitting fiction, for we can be quite sure that the process going on in her head that inspires and guides her report (to put it as neutrally as possible) is not a process of image rotation. (ibid, p. 258)

Here we can see clearly that Dennett, despite his claims of neutrality, is denying that people really have mental images. The above statement casts interesting light on a recent comment on his imaging abilities that Dennett made:

And when I do these mental gymnastics I make heavy use of my imagination, exploring various diagrams and pictures in my head, for instance. In short I exploit what Descartes would disparage as merely my imagination to accomplish what he would celebrate as my conception. (Intuition Pumps And Other Tools For Thinking: p. 289)

Here Dennett is claiming that he makes heavy use of mental images to think. However, when placed alongside his views on Shakey, it is obvious that he cannot be claiming that he experiences mental images. Suppose I see what I think is a ghost in front of me. If it is afterwards shown that what I thought was a ghost is merely a projected image created as a hoax, this will affect my beliefs about the nature of what I saw. However, it will not change the fact that I did indeed experience something. The case of Shakey is different, according to Dennett. We and Shakey may sincerely report rotating a mental image in our mind’s eye; however, we do not actually rotate the mental object or experience ourselves doing so; we merely report doing so. It is inconceivable that Dennett could subscribe to the view that all we are doing is giving a verbal report of something that we do not experience if he in fact experienced mental imagery. We suggest that, given Dennett’s views on the nature of imagery, his casual remark about thinking in imagery must be merely metaphorical.

Dennett spends a lot of Heterophenomenology Revisited arguing that he is neutral about the reality of the descriptions people give of their internal world. However, his manner of characterising people’s reports in terms of fiction contradicts this supposed neutrality. His discussion of after-images nicely illustrates this point:

Just as the fictional Sherlock Holmes can be correctly described as taller than the real Max Beerbohm, the fictional red stripes on your afterimage can be correctly described by you as somewhat more orange than the real red stripes on the flag flying outside the window. Fiction is parasitic on fact, and the afterimage stripe is red in exactly the same way that Holmes is tall.  (ibid, p.263).

So here we see that Dennett, despite his claims of neutrality, is in fact denying the reality of afterimages. We suggest that there are two sources for his scepticism: (1) a theoretical commitment to materialism, and (2) the fact that he is himself a non-imager. We will primarily focus on (2).

When criticising traditional phenomenology Dennett complains that we are unable to agree on what is and is not phenomenologically manifest (ibid, p. 261). We suggest that the reason phenomenologists have not discovered one true description is that different theorists have differently structured minds. However, for the purposes of this paper we will primarily evaluate Dennett’s kind of mind and compare it with the minds of other kinds of thinkers. We will examine some of the claims Dennett makes in his book Consciousness Explained to illustrate this point.

In Consciousness Explained Dennett asks his readers to perform the following experiment:

When you close your eyes, imagine, in as much detail as possible, a purple cow.

Done? Now:

(1)   Was your cow facing left or right or head on?

(2)   Was she chewing the cud?

(3)   Was her udder visible to you?

(4)   Was she a relatively pale purple, or deep purple?

(Consciousness Explained, p. 27)

Dennett speculates that most people would be able to answer these questions, and that if they were not able to answer them they probably did not try to call up the image but merely said the words ‘call up a purple cow’ to themselves. This seems to indicate that Dennett is himself capable of calling mental images to mind. This is interesting because in his discussion of Shakey the robot he argued that people only have mental images metaphorically, yet in the above discussion of the purple cow he seems to admit that he can form mental images. However, a closer look at Dennett’s views shows that he is not in fact claiming to be able to form mental images. Dennett claims that there are many things which we believe we experience but which, upon closer reflection, we do not experience. So, for example, in Consciousness Explained (pp. 53-55) he discusses our visual field. He notes that from the point of view of naive reflection our visual field seems to be uniformly detailed from the centre out to the boundaries. However, he asks his readers to perform an experiment of trying to distinguish cards at the periphery of the visual field. People always fail this test. Dennett takes this to establish that while it may seem to people that they have certain experiences, experimental studies can reveal otherwise. He views the case of imagery as similar to the case of the visual field: it often seems to people that they have experiences of mental images; however, a closer inspection reveals this to be wrong. In his Content and Consciousness Dennett speaks of a mental image of a tiger. He notes the following:

Consider the Tiger and his Stripes. I can dream, imagine or see a striped tiger, but must the tiger I experience have a particular number of stripes? If seeing or imagining is having a mental image, then the image of a tiger must-obeying the rules of imaging in general-reveal a definite number of stripes showing, and one must be able to pin this down with such questions as ‘more than ten?’, ‘less than twenty?’. If, however, seeing or imagining has a descriptional character, the question needs no definite answer. Unlike a snapshot of a tiger, a description of a tiger need not go into the number of stripes at all; ‘numerous stripes’ may be all the description says. (Content and Consciousness, p. 154)

Dennett’s argument is that people cannot count the stripes of their supposed mental images of tigers; therefore purported mental images merely seem to be images, and upon closer inspection are revealed to be descriptions of a tiger[3]. Dennett is doing a similar thing when asking people to call up a mental image of a cow: he admits that it seems like we can do so, but he then asks us further questions which aim to cast doubt on the reliability of our introspections.

Dennett follows up his question about an imagined purple cow by asking us to imagine a yellow cow, and to answer the same four questions about the mental image of the yellow cow. He then asks a fifth question: (5) What is the difference between imagining a yellow cow and a purple cow? Dennett argues that since nothing in the brain is purple or yellow, and even if it were we have no internal eyes to see such creatures, such claims must be mere seemings rather than something we actually experience. Some philosophers have argued that since we can experience a yellow or a purple cow in our mind’s eye, and these experiences by Leibniz’s law cannot be identical with brain states which are neither yellow nor purple, and they are not events in the external world, they must belong to a different realm: the realm of the non-material mind. Dennett’s discussion of Shakey above is his way of doing away with this problem. The thought experiment of Shakey is designed to show a creature who can report on mental states which it does not experience, and Dennett argues that as a matter of empirical fact we are such creatures. We will argue, on the contrary, that Dennett is, as Nietzsche says, giving an unconscious autobiography of his own mental type. We will develop this point further when discussing Dennett’s take on Kosslyn’s experiment on mental rotation later in this paper.

Later in Consciousness Explained Dennett makes further claims about introspection which give us a deeper insight into the type of mind he has. He notes the strange fact that since Descartes’ time philosophers have been trying to describe the basic facts of their internal experiences in as clear a manner as possible, yet have produced no clear results on which all agree. He briefly considers the possibility we are developing here: that people have different types of mind and are accurately describing their own types of experience. But he never really considers any evidence for or against this possibility. Instead he explains the lack of agreed-upon introspective results by claiming that our internal world is so vague that over-theorising our vague internal experiences results in us unwittingly creating fictional internal worlds. It is important to note that Dennett does not offer much evidence to support his view that introspection is theorising on a vague, practically non-existent internal world. He points out that some experiments prove that people are wrong about aspects of their perceptual experiences on which they have made confident assertions. However, the fact that people can be wrong about their perceptual experiences does not really prove anything about their introspective experiences. It does show that people can at times be gullible theorisers, but what this points to is that we should be careful when introspecting and perceiving, not that introspection and perception are bad tools.

So why does Dennett deny that the reason people have different descriptions of their internal worlds is because they have different types of minds? We argue that he is guilty of committing the Typical Mind Fallacy. Dennett makes the following claim:

I suspect that when we claim to be just using our powers of inner observation, we are always actually engaging in a sort of impromptu theorising-and we are remarkably gullible theorizers, precisely because there is so little to ‘‘observe’’ and so much to pontificate about without fear of contradiction. (ibid, p.68)

The key claim to note from the above is Dennett’s claim that ‘there is so little to observe’ when we introspect. We argue that we should take Dennett at his word here: when he introspects there are no images to observe because he, like roughly 5% of the population, is a non-imager. Both of the authors of this paper report that they find little to observe when they introspect. When they introspect they experience virtually no mental images; almost all thought is in terms of internal talking, although from time to time they do experience some mental images. So from an introspective point of view they can understand Dennett’s more extreme position, that of having no mental images, and can see why Dennett would believe that introspection involves a kind of unconscious theorising. The important point is that while Dennett, David Berman and David King are each drawing from their own experiences, the reports gathered by introspective psychologists indicate that not everybody shares Dennett’s experiences. We argue, and will establish empirically, that people who are eidetic imagers will find Dennett’s claims absurd. They are right to view Dennett’s claims as absurd from the point of view of their own minds. However, they would be wrong to think that Dennett is incorrect about his own mind; Dennett’s mistake was unconsciously generalising from his own mind to all minds. We will illustrate in the next section that Dennett’s views on mental imaging are derived not only from his own experiences but also from theoretical considerations.

Section 3: Dennett on Rotating Mental Images

In chapter 10 of Consciousness Explained Dennett discusses some experiments which have been done on the rotating of mental images. The experimental data he discusses is extremely interesting. The first experiment is noteworthy because one of the authors (Dave King) was unable to perform it at all, owing to a difficulty in calling up mental images. In the experiment, subjects are asked to check whether two objects beside each other but at different angles are the same shape.

The subjects typically reply that they are. When asked how they know, they reply that they rotated one of the objects in their mind’s eye and superimposed it on the other. Now obviously this could be explained away as just a way of talking which the subjects have. However, further experimental research casts doubt on this view. Subjects were asked to rotate an object which was at a 45-degree angle to an object beside it to see if they were the same size and shape, and then to do the same with an object at a 90-degree angle to the object beside it. The idea was that, assuming subjects rotate the objects in their mind’s eye at a constant speed, the 90-degree rotation should take twice as long. The experimenters (Kosslyn, 1980) found that it did indeed take twice as long to rotate an object through a 90-degree angle as through a 45-degree angle. This would appear to show that some people do indeed think with mental images.
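The reasoning the experiment relies on can be put as a simple timing model. The following sketch is our illustration only, with made-up parameter values, not the experimenters’ fitted estimates: if subjects rotate an image at a constant angular speed, response time should grow linearly with the angle of rotation, so the rotation component of the 90-degree trial is double that of the 45-degree trial.

```python
def response_time(angle_deg, base_ms=400.0, ms_per_degree=10.0):
    """Predicted response time for a mental-rotation trial: a fixed
    base cost (perception, decision, response) plus time proportional
    to the rotation angle. Parameter values are purely illustrative."""
    return base_ms + ms_per_degree * angle_deg

t45 = response_time(45)  # 400 + 10 * 45 = 850 ms
t90 = response_time(90)  # 400 + 10 * 90 = 1300 ms

# The rotation component (time above the fixed base) doubles with the angle,
# which is the pattern the experiments reported:
assert (t90 - 400.0) == 2 * (t45 - 400.0)
```

If reports of rotating in the mind’s eye were only a manner of speaking, there would be no obvious reason for response times to track the angle in this linear way.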

Dennett is primarily worried about explaining these facts in a manner which does not commit him to the existence of a Cartesian Theatre, and he gives a theoretical account of the experiment in terms that do not presuppose one. When discussing the human ability to rotate mental images, Dennett compares it with the ability of a CAD system to rotate various shapes on its cathode ray tube (CRT). He asks us to imagine a blind engineer using a CAD system as a prosthetic device to rotate images, communicating with the CAD system using braille. This CAD system, which he calls the Mark 1 CADBLIND, will be fitted with a computer vision system, complete with a TV camera aimed at the CRT. This computer visual system will be able to read the shapes on the CRT and communicate the results to the blind engineer. Dennett goes on to argue that if we can design a Mark 1 CADBLIND system, then developing a Mark 2 CADBLIND will be easy:

We just throw away the CRT and the TV camera looking at it, and replace it with a simple cable. Through this cable the CAD system sends the Vorsetzer a bit-map, the array of zeros and ones that defines the image on the CRT. (ibid, p. 291)

So after the CAD system performs its calculation it passes the information to the Vorsetzer (visual system) and the information is translated. Dennett notes that we are not really saving much in terms of the calculations which still need to be done; we are just getting rid of some unnecessary hardware. He therefore proposes a Mark 3 CADBLIND system which actually saves on calculation:

So our Mark 3 CADBLIND will exempt itself from huge computational tasks of image-rendering by taking much of what it “knows” about the represented objects and passing it on to the Vorsetzer subsystem directly, using the format of simple codes for properties, and attaching “labels” to various “places” on the bit-map array, which is thereby turned from a pure image to something like a diagram. Some spatial properties are represented directly-shown-in the (virtual) space of the bit map, but others are only told about by labels. (ibid, p. 293)

Here Dennett notes that such a saving only works if the two systems that need to communicate speak the same language: there would be a problem if the information the CAD system had was not in a format the Vorsetzer could use. Dennett argues that, given that the human brain was designed by the tinkering processes of natural selection, we should expect to find such difficulties of communication between the different areas of the brain. He speculates that since diagramming is an effective way of communicating, the brain may use such processes. He argues as follows: ‘Diagrams do indeed amount to re-presentations of the information-not to the inner eye, but to the inner pattern recognition mechanism that can also accept input from the outer eye’ (ibid, p. 293).
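The distinction Dennett draws between spatial properties that are shown in the bit-map and properties that are merely told about by labels can be made concrete with a toy data structure. This is our own illustration of the idea, not anything from Dennett's text; all names and values here are invented for the example:

```python
# A bit-map that *shows* a shape's spatial layout directly in the
# array itself -- this is the "pure image" part.
bitmap = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

# Other properties are not rendered as pixels at all; they are only
# *told about* by labels attached to places on the array, turning the
# image into something like a diagram ("colour number 37 here").
labels = {
    (0, 1): {"colour_code": 37},
    (1, 2): {"texture": "striped"},
}

def query_colour(row, col):
    """Answer 'what colour is here?' by consulting the attached label,
    not by inspecting any rendered colour: there are only numbers in
    the array."""
    return labels.get((row, col), {}).get("colour_code")

assert query_colour(0, 1) == 37      # a property told about by a label
assert query_colour(2, 0) is None    # no label attached at this place
```

The point of the sketch is that a downstream system can answer questions about colour or texture without any inner screen on which those properties are re-rendered, which is exactly the role Dennett assigns to the Vorsetzer.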

Twenty years after Dennett wrote Consciousness Explained, experiments by Kosslyn confirmed that two-thirds of the brain area that is lit up when a person sees (the occipital lobe) is also lit up when that person forms a mental image.

Not everyone accepts Kosslyn’s results (see, for example, Pylyshyn 2003), but a substantial majority do. It has furthermore been shown that when people form mental images, geometric patterns corresponding to those images are formed in the brain (Kosslyn et al. 2006). Experimental work on monkeys has also shown that topographic images form on their visual cortex when they view patterns of lights flashed to them by experimenters (Kosslyn et al. 2006).

Notably, despite arguing for an architecture similar to Kosslyn’s, Dennett interprets the evidence differently. Kosslyn believed that these experiments showed that people rotate such images in their mind’s eye. Dennett draws an entirely different conclusion:

Or in other words, it is asking whether any of the pixel-demons want to tell it, “color number 37 here”. All the red is gone-there are only numbers in there. In the end, all the work in a CADBLIND system must be done by arithmetic operations on bit strings, just as we saw at the lowest level of Shakey in Chapter 4. And no matter how quasi-pictorial or imagistic the processes are that eventuate in the Vorsetzer’s verbal answers to questions, they will not be generated in an inner place where the lost properties (the properties merely “talked about” in the bit map) are somehow restored in order to be appreciated by a judge that composes the answers…People are not CADBLIND systems. The fact that a CADBLIND system can manipulate and inspect its “mental Images” without the benefit of a Cartesian Theatre doesn’t by itself prove that there is no Cartesian Theatre in the human brain, but it does prove that we don’t have to postulate a Cartesian Theatre to explain the human talent for solving problems in “the mind’s eye”. There are indeed processes that are strongly analogous to observation, but when we strip down Kosslyn’s CRT metaphor to its essentials we remove the very features that would call for a Cartesian Theatre.  (ibid, p. 297)

Dennett is in effect arguing that while it may seem to some people that they are forming mental images in their mind’s eye, closer inspection reveals that they are doing no such thing. He is trying to explain away people’s verbal statements that they are forming mental images. Most people find Dennett’s view on this topic a bit strange. They can understand how Dennett can treat other people’s verbal reports using his heterophenomenological methodology, which treats people’s verbal reports as being epistemically on a par with the verbal reports of Shakey the robot. However, the problem most people have with Dennett’s characterisation of mental images comes from the first-person point of view. Independent of my verbal reports to others, if I see a purple cow in my mind’s eye, then I see it. Consider, by analogy, a table that I see in front of me. It could be argued that despite appearances the table is really just a colourless collection of quarks and gluons. Nonetheless, I still see a table in front of me. Likewise, while my mental image of a purple cow may be caused by messages to my occipital lobe from the temporal lobe, what I see is a purple cow. Dennett’s denial of mental images is only explicable, from a first-person point of view, if he is in fact incapable of forming such images. If he experienced vivid images then surely he would find his own CADBLIND and Shakey thought experiments less impressive than he does.

He goes on to ask us to perform the following introspective experiment:

Here is a simple test to remind us of how limited our imaging abilities actually are: In your mind’s eye, fill in the following three by three crossword puzzle, writing the following three words in the columns, starting with the left column: GAS OIL DRY. (ibid p.295)

He notes that if written on a page the words would pop out, whereas in the mind’s eye they do not. Here he is guilty of the typical mind fallacy: he assumes that because he cannot do it, nobody can. He admits there may be variation in people’s abilities to do crossword puzzles in their mind’s eye, and even says that some people may be better imagers than others. However, in the next sentence he reminds us that in the ultimate sense it is all tell and no show (ibid, p. 295); in other words, we do not really use mental images to solve the crossword puzzles. So here again we suggest that the fact that Dennett is a non-imager plays a massive role in the theoretical views he holds about the nature of mental imagery.

Oliver Sacks has recently spoken about a patient of his doing crosswords in her mind’s eye. I will give the whole quote here, because it contrasts strongly with Dennett’s view:

I was reminded, when Lillian told me of this, of a patient I had seen in the hospital some years before, who overnight became totally paralyzed from a spinal cord infection, a fulminating myelitis. When it became evident that no recovery was forthcoming she fell into despair, felt that her life was over-not only the great things of life but the little familiar pleasures of each day, like doing the New York Times crosswords, to which she was addicted. She requested that the Times be brought to her each day so that she could at least look at the puzzle, get its configuration, run her eyes along the clues. But when she did this something extraordinary happened, for as she looked at the clues, the answers seemed to write themselves in the spaces. Her visual imagery strengthened over the next few weeks, until she found that she was able to hold the entire crossword and its clues in her mind, after a single, tense inspection, and then solve it mentally, at her leisure later in the day. This became a source of great solace to her, in her paralysis; she had no idea, she later told me, that such powers of memory and imagery were available to her. (Oliver Sacks: The Mind’s Eye, p. 24)

This example is obviously anecdotal, and a verbal report cannot simply be assumed to be a correct description of an experience. Nonetheless, since introspective reports, behavioural evidence and neuroscientific evidence all indicate that people differ in the range of their imaging abilities, we have no reason to doubt Sacks’s patient. We suggest that Dennett could benefit from reflecting on the variation in people’s mental imaging, and that such reflection is necessary to stop him from unconsciously projecting his own type of mind onto all other people’s minds. One of the authors of the present paper (Dave King) tried Dennett’s introspective task of imaging crosswords. He described his experiences as follows:

When trying to imagine the crossword in my mind visual images played a limited role. I spelled the words out ‘’verbally’’ in my mind, and sometimes could see a glimpsy word in my mind’s eye. And I figured out what the words were by a process of memory, spelling out each word ‘‘verbally’’and then through memory figuring out what each horizontal word would be. Some imagery was present, I vaguely ‘‘saw’’ a square and red dots appear where I would imagine the words would be if in a crossword. Looking at each red dot seemed a physical experience; I could feel my eye balls move as I looked at the dots. The images did not seem to play any role in figuring out the words though as I said, images were present a lot of the time.

In an earlier paper, David Berman, a co-author of this paper, described his own experiences of trying to form a mental picture of a similar game:

Here is another similar test for image impairment. I say: let’s play noughts and crosses. You agree. But instead of your playing it on paper, I ask you to play it in your head or mind’s eye, using the a, b, c and 1, 2, 3 grid, as used by chess players. Here again the weak or non-imager is going to show his impairment; whereas the photographic and eidetic imager could not only easily play noughts and crosses in his head, but (amazingly to me) actual chess as well. Of course, working out the place of a letter in a word, or playing mental noughts and crosses, is not a task that comes up that often. But what does arise rather more normally is a question like: ‘Dad, how do you spell the word . . . ?’ For a low imager, that isn’t difficult if the word is short. But it is extremely difficult, as I know from my own experience, if the word is long, for then it is hard to be sure which letter one has reached by the middle or end of the word. Again, it is easy enough to do it if you can write the word down, and that is precisely what the strong imager can do, although in his mind’s eye. (Philosophical Counselling for Philosophers: p. 7)


The views of Dennett, Sacks’s patient, David Berman and Dave King are all different. Dennett, because he is, we believe, a non-imager, assumes that the differences result from theorising about our limited, vague internal experiences. We argue that he believes this because when he introspects he experiences no mental imagery, and he incorrectly assumes that all people have minds which are the same as his.

                    Section 4: Dennett as a Heterophenomenological Subject.

Dennett discusses pain in Chapter 3 of Consciousness Explained; the chapter is called ‘A visit to the phenomenological garden’. When discussing pain Dennett asks us to perform the following introspective experiment:

Close your eyes and imagine that someone has just kicked you, very hard, in the left shin (about a foot above your foot) with a steel-toed boot. Imagine you almost faint, so nauseatingly sharp and overwhelming is the jolt of pain you feel. You imagined it vividly; did you feel any pain? Might you justifiably complain to me that following my directions has caused you some pain? Some find it somewhat disturbing, and others find it a rather enjoyable exercise of the mind, certainly not as unpleasant as the gentlest pinch on the arm that you could call pain. (Consciousness Explained, p. 60)

Dennett then asks the following two questions:

(1) Imagine you dreamt of experiencing the same kick in the shin that you imagined in your introspective experiment, and you woke up clutching your shin.  Are dreamed pains real pains, or a sort of imagined pain?

Dennett’s answer is that dream pains are at least something that we do not like having, whereas imagined pains are something nobody minds having. He then asks another question:

(2) Compare dream pains to pains that arise in you while you sleep, when you roll over and inadvertently twist your arm into an awkward position and then, without waking up, without noticing it at all, roll back into a more comfortable position. Are these pains?

Dennett is ambivalent in his answer to this question. He seems to ask it only to illustrate how unclear the notion of pain is: despite some philosophers thinking that qualia like pain are immediately known, the experience is in fact much vaguer than we think.

To support our view that Dennett is a non-imager inadvertently committing the Typical Mind Fallacy, we sent his introspective task to over 40 people and used the results from the 12 who answered. We aimed for a qualitative approach, asking our participants to detail their experiences in as rich a manner as possible; we argue that such an approach is in keeping with the manner in which Dennett answered his own question. Our aim is merely to illustrate the divergences which occur within our small sample. We think that these divergences are suggestive and warrant further research on a larger scale, with more rigorous statistical analysis taking account of age, sex, cultural and educational differences. For the purpose of this paper we argue that our small sample is enough to show that Dennett is wrong to generalise from his own type of mind to the structure of all minds.

Dennett’s manner of answering his own question, combined with other studies, reveals a lot about his kind of mind. He argues that imagined pain is something that people do not mind having, while dreamed pain is something people do mind having. Most of the people who answered our questionnaire did not agree with him on this. We also think that Dennett’s account reveals that his dream experiences are more real for him than his imagined experiences. This is significant because one of the authors of this paper, David Berman, has evidence that non-imagers typically have vivid dream experiences[4], a pattern also observed by the psychologist Bill Faw[5]. So the fact that Dennett admits by implication that his dreams are more vivid and real than his imaginings fits well with our claim that he is a non-imager.

So we argue that there are four key facts which indicate that Dennett is a non-imager who unconsciously assumes that all people have the same mental architecture as himself. (1) There is evidence of great philosophers making this mistake in the past[6]. This obviously does not prove that Dennett is guilty of the same mistake; it merely establishes a historical precedent and opens up the theoretical possibility. (2) Dennett describes himself in terms of Shakey and the Mark 3 CADBLIND, creatures who have no imagery whatsoever, which indicates that he, like them, has no mental imagery. (3) The study of Reisberg et al. (2002) indicates that non-imagers typically remain sceptics about mental imagery despite the overwhelming evidence in favour of its existence. (4) Dennett’s confession that he finds dream pains more vivid than imagined pains is consistent with the findings of Faw (1997) and Berman (2009) that non-imagers typically experience vivid dreams.





                                     Section 5: Dennett: Are Dreams Experiences?

The psychologist Bill Faw has argued that people who are non-imagers typically experience vivid dreams. A similar pattern has been noted by David Berman in his studies of people’s introspective abilities over the years. In his 2009 “A Manual of Experimental Philosophy”, two of his subjects were non-imagers and both reported having strong dream experiences. One of these subjects, Timo, made the following reports to Berman:

I must confess that I cannot see any images in my mind, except when I fall asleep. But I can for instance freely wander around my childhood home in Vaasa, from room to room, and I can ‘see’ quite exactly what was there. I go back to that flat. But I do not have visual images which are comparable with those of dreams. I just remember’… I suspect that people call the pictures of imagination pictures even if they do not see any pictures. They conventionally call them so…David, believe me, I don’t see any pictures. I do the construction conceptually, and I remember what I thought in the (black) visual space. When I walk through my home, I use words. Believe me I see nothing except in some wide figurative sense perhaps. But no images. It is all about thinking not seeing. (A Manual of Experimental Philosophy: pp.63-66)

We can see from Timo’s report that his view of images as a type of metaphor is similar to Dennett’s view. Like Dennett, his report of dreams indicates experiences much more vivid than his non-imaging abilities when awake. The other non-imager in Berman’s study made similar claims. Marek, who reported that he is a non-imager, made the following claim about his dream experiences:


Yes I do have visual dreams, in fact the images (plus voices, of course) are so vivid that I try and do interact with the people in the dreams. So like Descartes predicted, I am fooled in my dreams and take them for reality. Sometimes though I can tell that I am dreaming when something very illogical happens. (ibid p. 70).


We have argued in this paper that Dennett is a non-imager and that, like a lot of non-imagers, he has vivid dream experiences. The evidence we cited above from Berman and Faw indicates that there is a correlation between non-imagery and strong dream experiences. We have also argued that Dennett’s discussion of the difference between imagined pain and dreamed pain further illustrates our point: his claim that dreamed pain is more vivid than imagined pain fits nicely with our empirical data that non-imagers have vivid dream experiences[7].

There are, however, two objections which can be made to our claim. (1) Our claim that Dennett has vivid dream experiences seems inconsistent with our claim that he denies that mental images exist: if Dennett does indeed experience mental images while dreaming, this undermines the suggestion that he denies the existence of images because he never experiences them. (2) Dennett has written two papers about the nature of dream experiences, and his views there do not, on the face of it, sit well with our view that he has strong dream experiences. We will deal with Dennett’s theoretical papers on dreams first, and then answer objection (1).

In Dennett (1978b), he aims to undermine the authority of the received view of dreams. The received view claims that dreams are experiences that occur during sleep, and that these experiences consist of sensations, thoughts, images and feelings organised in a narrative form, occurring somehow in awareness even though we are not consciously aware at the time. Dennett aims to cast doubt on this view and to outline an account of experience and memory which he can incorporate into his physicalistic theory of consciousness.

Dennett notes that the received view is an empirical theory, and that there is some anecdotal evidence against it. For example, people are often woken by an alarm which they also hear in their dream, even though the dream narrative seems to have been unfolding well before the alarm went off. So, unless the dreamer has precognition, he cannot have dreamt of the alarm at the time it seemed to occur in the dream. Dennett claims that this suggests that dreams are mere recollections, not something we actually experience. He conjectures that science may show that dreams are like déjà vu, i.e. it only seems as if we had them before (Dennett: Are Dreams Experiences, p. 155). We may, he speculates, have dream memories constructed in our brains ready to be ‘recollected’ upon waking.

Dennett argues that the difference between the received view and his alternative (cassette) conjecture is that on the new view, unlike the old, there is nothing it is like to have a dream (though there is something it is like to remember a dream). For Dennett there is otherwise nothing to distinguish between the received view and the cassette alternative. He notes that only our common-sense intuition counts against his alternative, and he argues that common-sense intuitions should not decide a scientific matter.

Dennett points out that one criterion which counts against the received view is ordinary language: we would normally say that to experience something we must be conscious of it, whereas on the received view we supposedly have unconscious experiences. This is a reason to prefer the alternative view, which is less vulnerable to conceptual confusion. He offers a further thought experiment which he claims shows that dreams may not be experiences. Suppose a person, as a result of some brain twinge, claims to have seen a ghost at 12pm. Suppose further that we are with the person at 12pm, and that between 12pm and 12.15pm he behaves normally and gives no indication of seeing a ghost. Now suppose that at 12.15pm this person gets agitated and claims that he saw a ghost at 12pm. Since the person gave no indication of this at 12pm, we will assume that at 12.15pm, as a result of the brain twinge, he is having a hallucination of a recollection (ibid, p. 20). Dennett claims that this hallucination story has consequences for the theory of dreams. Normal dreams do not get reported until after the fact, so we do not have sufficient evidence to say people are experiencing dreams. He makes an exception for nightmares, because there we have behavioural evidence in the form of increased heart rate, moans and cries. Ordinary bad dreams, by contrast, would not be experiences, because of the lack of behavioural evidence at the time we claim to have experienced them, though he claims we could remember such a dream in agony (ibid, p. 20).

He argues quite plausibly that if the paralysis during sleep is only peripheral and more central processes are lit up, e.g. the visual system (which it is), this may count towards saying that dreams are experiences. However, given that the visual area is also lit up in blindsight and hysterical blindness, we cannot say for certain that a lit-up visual area means that we are having dream experiences. He notes that it is an open theoretical question whether dreams are experiences.

He argues that we will want to investigate four things if we are to construct an accurate theory of dreams.

(1)   The role of experiencing in guiding current behaviour.

(2)   Our current abilities to say what we are experiencing.

(3)   Our recollective capacities to say what we have experienced.

(4)   The functional saliencies which emerge from empirical investigation.

So in effect Dennett argues that we cannot at present decide between the alternatives; however, any decision between them must be established empirically.

In his later paper “What is Dreaming for, if anything?” (2013), Dennett defends this Malcolm-type view of dreams against an alternative proposed by the psychiatrist Allan Hobson, whose theory of the nature of dreams is a version of the received view. Dennett again offers a series of arguments which he thinks show that his view is as viable as the received view. The one area where he expands upon his earlier theory is in emphasising the theoretical possibility that linguistic competence plays a central role in developing our hallucinations of dream experiences.

It might be felt that the view outlined in Dennett’s two papers on dreams, namely that dreams are not experiences, is strong evidence against our claim that he is a vivid dreamer. We, on the contrary, think that Dennett’s papers on dreams support our position rather than going against it.

It is worth noting that the empirical evidence that non-imagers have vivid dreams seems, on the face of it, a strange fact. If a person is incapable of having mental images, it is odd that he can have vivid dreams, which would seem to involve imagery. However, from a phenomenological point of view mental images are different from hallucinations. People never take mental images for the real thing (even eidetic imagers do not), whereas people typically confuse hallucinations with real experiences. The fact that dreams are typically confused with reality indicates that they have the quality of a hallucination rather than of imaging.

Dennett, while claiming that dreams may not be directly experienced, does indicate that we experience our memories of them and that they are hallucinations of recollections of something we never experienced (ibid, p. 20). But we experience the actual hallucination, and the hallucination of a bad dream can be remembered in agony (ibid, p. 20). So although the hallucination is a false memory, we still experience the hallucination itself, and it can leave us in agony. This view is thus actually consistent with Dennett’s claim that imagined pain is not as severe as dreamed pain. We will not evaluate Dennett’s defence of Malcolm’s claim that dreams are not directly experienced, because it is not relevant to this paper and Dennett admits it is only a theoretical possibility. It is enough to note that nothing in Dennett’s papers is inconsistent with our claim that he is a non-imager who has vivid dream experiences. Even if we consider dreams as hallucinated memories, Dennett indicates that we can still experience these hallucinations in agony. This contrasts strongly with his claim in Consciousness Explained that people cannot imagine real pain, and it is consistent with his claim that dreamed pain is something we mind having.
It is also consistent with our view that Dennett is a non-imager who is a vivid dreamer. Dennett is guilty of assuming that his idiosyncratic psychology is representative of human minds generally. He is not alone in this: Berman has argued convincingly that Locke and Berkeley were guilty of the same assumption. We think it is worth researching whether other thinkers in the philosophy of mind, such as Nagel and McGinn, make a similar mistake. Philosophers like Dennett and Quine have argued that philosophy should become more naturalistic and empirical; we aim to extend this empiricism to the study of the individual minds of philosophers and of how those minds unconsciously influence their philosophical theories.

Section 6: Real Seeming

In his Content and Consciousness Dennett claims that while it may seem to some people that they have mental imagery, closer examination reveals that what they call a mental image is really only a description. Over twenty years later, in Consciousness Explained, when discussing Kosslyn’s experiments on mental imagery, Dennett noted that despite appearances mental imagery is really all tell and no show. One curious thing about this view is the claim that, despite the way things seem, mental imagery is really mental description; what is strange is the idea that a description can seem like an image. This is a very odd way to understand the word ‘seem’. A paradigm example of an x seeming to be a y is given by Descartes: a stick in transparent water will seem to be bent because of light refraction, though in reality the stick is not bent. What Descartes means by ‘seems to be’ is the same as ‘appears to be’, and this of course is the standard meaning of ‘seems to be’. However, even to a weak imager like me, it is patently obvious that mental images are nothing like mental descriptions. If something really seemed (as in appeared) to me like an image, then I would have an experience of something image-like, and a description is in no way image-like. This leads to the question of what Dennett could possibly mean when he admits that it at least seems to some people that they experience mental images.

In Consciousness Explained Dennett carefully explains what he means by the word ‘seems’; evaluating his views on this will help clarify his strange beliefs about the nature of images. In Chapter 5, Section 5 of Consciousness Explained Dennett discusses the colour phi experiment[8], and in this discussion he makes his views on ‘seeming’ explicit. The phi phenomenon is apparent motion. We see examples of it on our television screens every day, where a series of still pictures flashed one after the other at a certain speed creates the illusion of motion. Dennett discusses a simple example in which two spots separated by as much as 4 degrees of visual angle are flashed rapidly, creating the illusion of one spot moving back and forth (ibid, p. 114). Kolers and von Grünau (1976) ran a colour phi experiment with two dots, one red and one green, flashing on and off. This gave the illusion of a red spot starting to move and changing colour to green mid-passage. Since the red dot is not moving and does not turn into a green dot, we need to ask what is going on in this illusion. As the red dot appears to move, we see it turn green on the way to its destination. The question is: how do we see the dot turn green before we have actually seen the green dot? One might think that the mind must first register the red dot and then the green dot, and that only then is the apparent motion played before the mind’s eye. To think this way, Dennett warns, is to demonstrate that one is still in the grip of the metaphor of the Cartesian Theatre (ibid, p. 115).

To loosen the grip of this picture on our minds, Dennett discusses two fictional processes which one could attribute to the brain, which he calls the Orwellian process and the Stalinesque process. An Orwellian process occurs when I misremember something because my brain tampers with my memory so that I no longer remember accurately. A Stalinesque process occurs when the brain projects a false picture of reality into the mind’s eye. Dennett notes that while the distinction between Orwellian and Stalinesque processes makes sense in the external world, it is an illusion to assume that it makes sense as an explanation of what is going on at the level of the brain.

Let us think of both of these processes as they apply to the case of colour phi.  In the Orwellian case we did not see the apparent motion; our brain merely revised our memory and informed us that we did see the motion. In the Stalinesque case we unconsciously registered the two dots and afterwards our brain created a kind of mock event for us to watch. Dennett notes that once we give up the notion of Cartesian Materialism, we will see that there is no answer to the question of whether the Orwellian or Stalinesque process took place. He puts things as follows:

So here is the rub: We have two different models of what happens to the color phi phenomenon. One posits a Stalinesque “filling in” on the upward, pre-experiential path, and the other posits an Orwellian “memory revision” on the downward, post-experiential path, and both of them are consistent with whatever the subject says or thinks or remembers…Both models can deftly account for all the data-not just the data we already have, but the data we can imagine getting in the future (ibid, pp. 123-124)

So there is no fact of the matter which can decide between the two different stories. Dennett argues that the reason we cannot decide between the two accounts is that there is really only a verbal difference between them. With Dennett’s rejection of Cartesian Materialism and his alternative Multiple Drafts theory of consciousness, we can no longer draw a non-arbitrary line to decide when an event becomes conscious. There is therefore no fact of the matter as to whether an Orwellian or a Stalinesque process took place.

When Dennett claims that we cannot decide between the Stalinesque and Orwellian alternatives, we are left with what seems like a mystery. In the external world no red object is really moving and turning into a green object, yet Dennett also denies that a Stalinesque show trial is played before the mind’s eye. So the obvious question is: where does the movement of the dot occur? Dennett’s answer is that the dot does not move and turn green; it only seems to. However, to say that a dot seems to move is to say that people have an experience of the dot moving, and this leads us back to our original question: what generates this experience, and how is it generated? Dennett thinks that this is a bad question, because the brain does not need to create an experience of the dot moving; it merely has to form a judgment that such movement occurred:

The Multiple Drafts model agrees with Goodman that retrospectively the brain creates the content (the judgment) that there was intervening motion, and that this content is then available to govern activity and leave its mark on memory. But the Multiple Drafts model goes on to claim that the brain does not bother “filling in” the blanks. That would be a waste of time and (shall we say?) paint. The judgment is already in, so the brain can get on with other tasks. (ibid, p. 129)

This claim of Dennett’s is extremely strange. He is claiming that the brain judges that the motion occurred. However, as a matter of fact, we do not experience the motion; we only think we do.  The obvious reply to this is to categorically state that I do experience the movement and I judge that the movement occurred based on this experience. In other words, the experience is prior to the judgment. The experience is not of a fact in the external world (where no movement occurred), it is rather an experience of a person’s subjective qualia. When Dennett denies that it is the experience that leads to the judgment, he is leaving the phenomenal experience out and is focusing entirely on access consciousness.

The claim that Dennett is denying the existence of phenomenal consciousness is, on the face of it, an incredible one. So before proceeding it is important to show that this is indeed Dennett’s position. To demonstrate this I will now provide detailed quotations from him. When discussing phenomenal space Dennett makes the following claim:

Now what is phenomenal space? Is it a physical space inside the brain? Is it the onstage space in the theatre of consciousness located in the brain? Not literally. But metaphorically? In the previous chapter we saw a way of making sense of such metaphorical spaces, in the example of the “mental images” that Shakey manipulated. In a strict but metaphorical sense, Shakey drew shapes in space, paid attention to particular points in that space. But the space was only a logical space. It was like the space of Sherlock Holmes’s London, a space of a fictional world, but a fictional world systematically anchored to actual physical events going on in the ordinary space of Shakey’s “brain”. If we took Shakey’s utterances as expressions of his “beliefs”, then we could say that it was a space Shakey believed in, but that did not make it real, any more than someone’s belief in Feenoman would make Feenoman real. Both are merely intentional objects. (Ibid, pp. 130-131)

The above passage is very instructive. It speaks to our topic of mental images and again shows that Dennett thinks of them as theorists’ fictions. Furthermore, his invoking of Shakey, who despite its verbal reports is not experiencing any items in phenomenal space, shows that Dennett thinks that we, like Shakey, despite our verbal reports are not experiencing anything in phenomenal space. Dennett is claiming that our brains may tell us that we have such and such experiences, and as a result of this brain report we form the judgment that we saw a red light move and turn into a green light. However, this judgment, despite appearances, is not grounded in a phenomenal experience.

It is worth noting that a lot of thinkers misinterpret Dennett’s claims on ‘seeming’ and colour-phi as indicating that he denies that we experience colours. This is not the case. Dennett’s arguments above only apply to colour hallucinations; Dennett tells a different story about how we perceive colour in the external world.

To understand Dennett’s views on colours it is helpful to think in terms of the primary/secondary quality distinction. One of the central motivations for claiming that the world is in your head is the existence of secondary qualities. When one looks at a beautiful garden one sees a variety of different colours: the bright yellow sunflower, the green grass, the multi-coloured butterflies, the blue sky and the bright yellow sun. Since the seventeenth century people like Galileo and Locke have been telling us that colours do not exist in the mind-independent world. Colours are effects of light reflecting off objects and hitting our retinas in a variety of different ways. The majority of scientists since Galileo accept this dichotomy between primary and secondary qualities. The primary qualities are solidity, extension, motion, number and figure, while the secondary qualities are colour, taste, smell, and heard sounds. One consequence of accepting this picture is that the world is not as it reveals itself to us in our experience; in particular, colours do not exist in the mind-independent world. A further consequence is that the rich world we experience, consisting of tastes, smells and colours, exists only within our minds. So on this view we have a subject who is presented with certain experiences, and only some of those experiences correspond with a mind-independent entity. The Cartesian materialist who accepts this world picture has a difficult job on his hands. Nowhere in the brain is the experience of a blue sky or a yellow daffodil located. He may be able to provide neural correlates for these experiences, but he will not be able to point to the spatio-temporal location where the experience and the subject are located. So presumably the Cartesian materialist will have to argue for a strong emergence thesis.

Rather than going down this road, Dennett interprets the dichotomy between primary and secondary qualities differently from most contemporary theorists. Dennett has discussed the status of colours throughout his philosophical development: in particular in his 1968 Content and Consciousness, his 1991 Consciousness Explained and his 2005 Sweet Dreams: Philosophical Objections to a Science of Consciousness. I will now try to give a synoptic view of Dennett’s views on the topic. In his first major discussion of colours he noted that while most believed that colours are secondary qualities which do not exist in the external world, there are reasons to doubt this conclusion.

He centres his criticism on language and on what we are referring to when we use colour words. If we view colours as secondary qualities we are committed to the view that when I refer to something red I am referring to something within my mind. On this view, when two people claim that they are referring to something red, we do not know whether they are referring to the same thing, as their inner experiences of red may differ, and we cannot decide the matter because we have nothing public against which to compare their experiences. If we do not want to admit the possibility that a teacher can never know when his pupil has actually learned the meaning of the word ‘red’, we must admit that colour words refer to publicly observable entities.

One difficulty is that solving the sceptical problem of colour reference by arguing that colour words refer to publicly observable entities leaves us with the conundrum of where colours exist. They do not exist in the mind-independent world, and they do not exist in the mind, and there is nowhere else for them to exist. So one is led to the silly conclusion that colours do not exist anywhere. This conclusion must be wrong, and Dennett correctly notes that colour words refer to publicly observable entities that one can be right or wrong about (Content and Consciousness, p. 161). So colours seem to exist, and to exist in a publicly observable sphere.

For Dennett, since colours are publicly observable entities which we can be right or wrong about, they must be properties of the external world. This leaves Dennett with the question: what properties exactly are they? He notes that colours are not reflective properties of surfaces which can be cashed out at a sub-atomic level. This is because:

“Although the sub-atomic characteristics of surfaces that reflect light predominantly of one wavelength can now be described in some detail, those different types of surface do not correspond neatly to the colours we observe things to be.” (ibid, p. 161)

Also, different wavelengths of reflected light can cause the same colour experience in a person. So the job of characterising what property colours actually are is more complex than one might assume. Dennett notes that when a person is referring to red we will need to cash out the property they are referring to in terms like: the person is referring to the reflective property of x or y or z…(and the disjunction associated with one colour might be very long).

Dennett asks us: if the disjunctions of properties which can be associated with a person’s experiences of colour have little in common with each other, are we driven to the conclusion that colours do not exist? To think through this possibility he considers colour-blind people, who have poorer discriminative capacities than us, and a hypothetical alien whose colour discriminative capacities are greater than ours. He notes that we would say that the colour-blind man, who may say that ripe (red for us) apples and grass are the same colour, is suffering from a cognitive illusion. On the other hand, if an alien had greater discriminative capacities than ours, so that it constantly saw things as changing colour, we would also say that it was experiencing colour illusions. This is because the meaning of colour terms is defined in terms of OUR discriminative capacities; which means that WE judge certain things in the world to be red, green etc. So relative to our form-of-life these other creatures would be suffering from a form of cognitive illusion.

Dennett concludes with the following statement:

Colour is not a primary physical property like mass, nor is it a complex of primary properties, a structural feature of surfaces. Nor again is it a private ‘phenomenal’ quality or an ‘emergent’ quality of certain internal states. Colours are what might be called functional properties. A thing is red if and only if when it is viewed under normal conditions by normal human observers it looks red to them, which only means: they are demonstrably non-eccentric users of colour words and they say, sincerely, that the thing looks red. Their saying this does not hinge on their perusal of an internal quality, but on their perception of the object, their becoming aware that the thing is red (ibid, p. 163)

I am not sure whether Dennett really manages to avoid the problem of where the experience of red is located. However, it should be obvious that he is not denying that colours exist; rather he is claiming that they are not paraded in a Cartesian Theatre.

In Part 1 of this paper we have shown that Dennett is denying that mental images exist. He treats us as robots like Shakey, whose programming forces us to judge that we experience mental imagery. He explains away the fact that people report that it seems to them that they experience mental imagery by denying that ‘real seemings’ exist. Dennett’s account fails. He cannot explain why some people report experiencing vivid mental imagery while others report that they do not; his account cannot accommodate the heterophenomenological reports of ordinary human subjects. Furthermore, when judged from a heterophenomenological point of view, his own reports are consistent with those we would expect from a non-imager. So we conclude that when it comes to mental imagery Dennett is guilty of the Typical Mind Fallacy.

In our introduction we discussed Berman’s typology, which divides philosophers into three main types. We argued that Dennett is a combination of Type 2 and Type 3 thinkers. We have seen in Part 1 that Dennett denies that things like the colour-phi phenomenon, afterimages, and mental images are presented before the mind’s eye. This denial is consistent with the type of claims made by William James. Dennett’s denials are strong indications that he is not a Type 1 Mentalistic Thinker like Descartes; rather he is, like William James, a Type 2 Bodily Thinker. In Part 2 we will show that while Dennett is primarily a Type 2 Bodily Thinker, he is also, to a lesser extent, a Type 3 Socio-linguistic Thinker. We will show this by evaluating Dennett’s views on the nature of pain.


Dennett primarily explored the nature of pain in the early days of his philosophical development, between 1968 and 1978. His views on the subject of pain are subtle, and understanding them will give us an insight into the nature of Dennett’s type of mind. We will begin by discussing his views on pain as developed in his 1968 doctoral thesis Content and Consciousness. We will then discuss his later views as developed in his “Why You Can’t Make a Computer That Feels Pain”. Once this is done we will show what these views reveal about the nature of Dennett’s mind.

On page 6 of Content and Consciousness Dennett denies the identity theory. He argues that pains are too different from brain states for one to be identified with the other. He further cites Putnam’s argument that the identity theory must be false because it leads to absurd, counter-intuitive results. If the identity theory were true, then only creatures with identical chemistry could think the thought that ‘Spain is in Europe’; this is surely false, since a robot or an alien could clearly think that thought. So Dennett argues that the identity theory is false. However, without the identity theory we are faced with all of the problems of dualism (how do the mind and body interact, etc.). He argues that a lot of people are drawn to the identity theory because it seems less problematic than the intractable dualistic picture. Nonetheless he argues that the identity theory, like dualism, is false, so we need a new way of dealing with these problems.

Dennett’s suggestion is a kind of blending of Ryle and Quine. Ryle had long noted that it was a category mistake to try to reduce mind talk to brain talk because they belong to different logical types. Critics of Ryle noted that he never really spelled out what he meant by ‘different logical types’ and by things belonging to different ‘conceptual categories’. Dennett aimed to make sense of talk of the mental and the physical belonging to different categories by appealing to Quine’s philosophy of language as spelled out in Word and Object. Quine had noted that certain phrases, such as ‘for-the-sake-of’, are not translatable into the syntax of quantification, and so have no clear identity conditions which would force us to be ontically committed to the existence of ‘sakes’. So, for example, we can say things like ‘for-the-sake-of-John’, but we cannot say whether there is one ‘sake’ or two. We cannot say of ‘sakes’ that they have attributes such as being ‘a good sake’, or ‘a big sake’, etc. Dennett argues that, despite appearances, terms like ‘sakes’ are non-referential. Confusing non-referential terms with referential ones can result in a series of intractable problems. To see this he asks us to think of the term ‘voice’. ‘Voice’ is used in various different phrases, and these phrases give the appearance that ‘voice’ refers to an entity in the world. So when, for example, it is said that ‘John has lost his voice’, an identity theorist will assume that ‘voice’ is identical with some physical process going on in John, that the word ‘voice’ refers to a physical process x. There are problems with this view. Suppose one were to identify John’s voice with a particular fact about John’s vocal cords. This identity is not complete; there are digital recordings of John’s voice which will survive even after John has died.
Given that the recordings are in various different media, it makes little sense to identify ‘voice’ with vocal cords, and CD recordings, and records: all that these various things have in common is a disposition to cause certain experiences in normal people. Do we say that ‘John’s voice’ denotes various different concrete objects, so that ‘John’s voice’ is a universal? Do we say that ‘John’s voice’ is somehow identical with all the various instantiations? If so, then it is hard to make sense of claims like ‘John’s voice is strained’, ‘John has lost his voice’, etc. When we say that ‘John has lost his voice’ do we mean that his vocal tract is damaged and that he has lost all recordings of his voice? Dennett claims that such silly questions can be avoided by accepting that ‘voice’, like ‘sake’ and ‘dint’, is a non-referential term. These are ways of talking which are useful in some contexts but which do not carve nature at its joints. He suggests that we can treat ‘the mind’ in a similar way to ‘sakes’ and ‘voices’: useful ways of talking which do not pick out entities in the external world.

The concept of pain is one of the central concepts for any theory of the mind. Some people have used pain as a clear case for the identity theory: to be in pain state x is to be in neural state y. Dennett notes that this position has its difficulties because pains are nothing like brain states. If pain state x has intrinsic properties which are not shared by neural state y, then by Leibniz’s law they cannot be the same thing. So Dennett admits that brain states and pain states are not the same thing. Since he does not want to defend any kind of emergent dualism, he instead decides to explain away the problem by analysing ‘pain’ talk as non-referential, just like ‘voices’, ‘sakes’ and ‘dints’.

Dennett notes that if we try to explain the nature of pain at the personal level we will run into a series of cul-de-sacs. (1) We cannot analyse the notion that ‘I am in pain’. If someone asks me how I know I am in pain, or how I can tell pain from non-pain, no real explanation is possible. I could say that pain hurts but non-pain doesn’t; however, this merely relies on the unanalysed notion of ‘hurt’, which will presumably have to be explained by invoking ‘pain’, so our purported explanation will be circular. The truth is that we just do discriminate pain from non-pain. It is automatic, and we follow no explicit theory when judging that we have experienced pain. (2) At the personal level we likewise just automatically locate our pains; again, we do not follow an explicit theory when doing this. (3) We cannot answer the question of what it is about painfulness that makes us want to avoid it. Saying we want to avoid pain because it hurts is no answer, because it raises the unanswerable question of why we want to avoid being hurt. For these three reasons Dennett correctly notes that we cannot explain pain at the personal level. Any such purported explanation will of necessity presuppose the undefined notion of pain. Since explanations at the personal level have failed, Dennett notes that perhaps we should try to explain things at the sub-personal level. However, he correctly notes that such a sub-personal explanation will necessarily involve a change of subject; at the sub-personal level we will no longer be speaking about pain:

But when we abandon the personal level in a very real sense we abandon the subject matter of pains as well. When we abandon mental process talk for physical process talk we cannot say that the mental process analysis of pain is wrong, for our alternative analysis cannot be an analysis of pain at all, but rather of something else-the motions of human bodies or the organization of the nervous system. Indeed, the mental process analysis of pain is correct. Pains are feelings, felt by people, and they hurt. People can discriminate their pains and they do so not by applying any tests, or in virtue of any describable qualities in their sensations. Yet we do talk about the qualities of sensations and we act, react and make decisions in virtue of these qualities we find in our sensations. Abandoning the personal level of explanation is just that: abandoning the pains and not bringing them along to identify with some physical event. The only sort of explanation in which ‘pain’ belongs is non-mechanistic; hence no identification of pain with brain processes makes sense… (Content and Consciousness, p. 103)

So Dennett is arguing that a personal level explanation will end up vacuous, while a sub-personal explanation will be a real explanation but will not deal with the personal level phenomenon of pain. Since the personal level explanation is a non-explanation, we need an explanation at the sub-personal level, and this, of course, is not an explanation of the common sense term ‘pain’. So Dennett is an eliminativist about pain when it comes to scientific explanation. However, it seems almost incoherent to claim that Dennett is an eliminativist about pain. In the above quote he claims that at the personal level people can say truly that they are in pain and that the pain is awful. So Dennett could be accused of making contradictory claims about the nature of pain: at the scientific level he acts as an eliminativist about pain, whereas at the personal level he admits that pain exists and is awful. On pain of admitting true contradictions into his theory, Dennett needs to decide which of the two levels of explanation is correct. Dennett, as a good naturalist, sides with the scientific level of description and aims to explain away the personal level. He began his attack on the personal level ten years later with his famous 1978 paper “Why You Can’t Make a Computer That Feels Pain”. In that paper Dennett argued that the personal level concept of pain is internally contradictory, and so can be eliminated without a guilty conscience. In his earlier discussion in Content and Consciousness Dennett had simply claimed that ‘pain’ is a non-referential term: pain talk is a way for subjects to organise their experiences, but it will disappear when one describes organisms in a scientific manner.

Dennett offers the following physicalistic explanation of what is known at the personal level as pain:

When a person or animal is said to experience a pain there is afferent input which produces efferent output resulting in certain characteristic modes of behaviour centring on avoidance or withdrawal, and genuine pain behaviour is distinguished from feigned pain behaviour in virtue of the strength of the afferent-efferent connections-their capacity to overrule or block out other brain processes which would produce other motions. That is, the compulsion of genuine pain behaviour is given a cerebral foundation. (ibid, p. 106)

Immediately on reading the above passage the reader will object that Dennett is not talking about actual pain, he is rather talking about pain behaviour. Dennett anticipates this objection and replies as follows:

Now would this account of pain behaviour suffice as an account of real pain behaviour, or is there something more that must be going on when a person is really in pain? It might be supposed that one could be suddenly and overwhelmingly compelled to remove one’s finger from a hot stove without the additional ‘phenomenon’ of pain occurring. But although simple withdrawal may be the basic or central response to such stimulation, in man or higher animals it is not the only one. Could any sense be made of the supposition that a person might hit his thumb with a hammer and be suddenly and overwhelmingly  compelled to drop the hammer, suck the thumb, dance about, shriek, moan, cry, etc., and yet still not be experiencing pain? That is, one would not be acting in this case, as on a stage; one would be compelled. One would be physically incapable of responding to polite applause with a smiling bow. Positing some horrible (but otherwise indescribable) quality or phenomenon to accompany such a compelled performance is entirely gratuitous. (ibid, p.106)

Here again we see Dennett pointing out that the personal level concept of ‘pain’ is explanatorily barren, non-referential, and pointless for a scientific theory. He argues that if we want to explain ‘pain’ we must do it in terms of behaviour and nerve impulses, and of course in that case we will not be explaining the personal level concept of pain at all. This approach has won Dennett few adherents; it seems to leave out the most important aspect of pain, its horrible feeling. Few people could accept an explanation of pain which left out the actual feeling of pain. Dennett’s later attacks on the notions of qualia and real seemings were further designed to attack the notion that pain is a thing which we experience, as opposed to a mere seeming. However, prior to developing his arguments against qualia and real seemings, he attacked the very coherence of the personal level concept of pain.

In his “Why You Can’t Make a Computer That Feels Pain” Dennett developed a detailed thought experiment on whether it would be possible to build a computer which feels pain. He begins by discussing a computer simulation of pain. He notes that this simulation will not feel pain any more than a computer simulation of a hurricane will get us wet. His discussion of the pain simulation begins by noting that it will have to be able to make accurate predictions of pain behaviour. Thus it must predict, for example, that a person who has an anvil dropped on his unanaesthetized left foot will jump around in agony on his right foot, with a tear in his eye, screaming. He anticipates that people will be unsatisfied with such a programme, as it leaves out the internal aspect of the behaviour. So he first improves the programme by stipulating that it can predict the internal behaviour of C-fibres etc. To the objection that the programme still leaves out the feel of pain, he stipulates that the programme can report pain states and how vivid those states are:

Again we feed in

‘An anvil is dropped from a height of two feet on S’s left foot,’

And this time we get back:

There is a pain, P, of the in-the-left-foot variety in S; P begins as a dull, scarcely noticeable pressure, and then commences to throb; P increased in intensity until it explodes into shimmering hot flashes of stabbing stilettoes of excruciating  anguish (or words to that effect)….; S’s C-fibres are stimulated… (Why You Can’t Make a Computer That Feels Pain p. 418)

Dennett argues that there is no reason in principle that we can’t build a programme capable of making the above predictions. To people who argue that the above description leaves out actual pain experience, Dennett replies that the above is meant to be a simulation so it is no real objection that it leaves out actual pain, any more than it is an objection to a simulation of a hurricane that it does not make you wet.

He argues that another project which the skeptic may ask for is the building of a machine which can actually experience pain. This is a machine which does things in its environment as opposed to merely simulating them. At this point Dennett discusses our old friend Shakey. He argues that what the skeptic actually wants us to do is build a Shakey type robot which can actually feel pain.

Dennett argues that to build our robot we will need to instantiate our simulation/theory of pain in the robot’s circuits. We will have to modify the theory in ways that connect with the robot’s motor abilities. It is easy enough to build a robot that can shed a tear; it is obviously a much more complicated matter to build a robot with sufficient motor control to dance around on one foot when struck on the other. However, none of this is a problem in principle; it just requires complex engineering. Such a robot would have pain behaviour and the ability to report that it was in pain; Dennett even argues that such a robot would have the ability to detect pain intensity:

But what about the rest of our earlier simulation? What happens to the hot flashes and dull throbs mentioned in our descriptive program? These parts of the output we transform into labelled flashing lights and leave them that way: sometimes the ‘dull throb’ light is on (blinking slowly if you like) and sometimes the ‘hot flash’ light is on. If the skeptic insists on more verisimilitude here, what can he be asking for? Remember that these lights are not blinking randomly. The ‘dull throb’ light goes on only at appropriate times, the robot can then say ‘there is a dull, throbbing pain’ and the other apposite side effects of dull, throbbing pains are presumed to be arranged to coincide as well. (ibid, p. 420)

Most people will not be happy with this robot analogue; they will argue that the robot is not actually experiencing pain despite the complex pain behaviour it engages in. Dennett thinks that such people are looking for a robot whose pain is identical to human pain. He argues that such people are holding AI researchers to very high standards; nobody would argue that a robot cannot walk because its legs are not made of flesh and bone and hence are not identical with human legs, so why claim that robot pain must be identical to human pain? As long as they are functionally equivalent, Dennett sees no problem with equating his imagined robot’s pain with human pain. However, he recognises that most people do not agree with him on functionalism. Such skeptics will not be happy until computers feel actual pain in the same way normal humans do. Dennett, though, argues that since the personal level concept of pain is in such bad shape and so full of contradictions, it may be impossible to instantiate in any creature, not just a robot.

He further argues that we assumed above that we would have no difficulty constructing a theory of pain, and that our difficulty would come only in instantiating this theory in the robot. This assumption can be questioned, so to test it he tries to construct a theory/model of pain which is consistent with our various different intuitions about pain. His model tries to take account of various neurological facts about our pain receptors. So, for example, he notes that when our nociceptors are stimulated through injury, the signals travel to the brain through two different types of fibres: rapidly through the large myelinated A-fibres, and slowly through the narrow unmyelinated C-fibres (ibid, p. 424). He notes that A-fibres and C-fibres have two different functions: C-fibres transmit ‘dull’, ‘deep’, ‘aching’ or ‘visceral’ pains, while A-fibres cause ‘sharp’, ‘bright’, ‘stabbing’ pains. Following Melzack and Wall, Dennett notes that the neurological evidence indicates that A-fibres inhibit the effect of C-fibres, closing the gate to pain input transmission. Simply put, the A-fibres transmit to the limbic system (shared with all animals), while the C-fibres transmit to higher cortical areas which only higher primates have. The fact that different fibres transmit to different brain areas predicts a distinction between the hurtfulness and awfulness of pain[9]. Dennett goes on to develop his version of Melzack and Wall’s model of pain, and then tests how this model deals with a variety of reports about the nature of pain. For example, a lot of people report that when they concentrate on their pain it loses some of its adverse qualities. This is explained because our concentration “raises the volume” of the A-fibres, thus inhibiting the effects of the C-fibres. He extends this discussion to show how the model can handle data such as the effects of yoga, the phantom limb phenomenon, etc.

He then considers the effect of various different drugs on pain and notes some strange properties which his model can handle:

A further curiosity about morphine is that if it is administered before the onset of pain (for instance, as a pre-surgical medication) the subjects claim not to feel any pain subsequently (though they are not numb or anesthetized-they have sensation in the relevant parts of the bodies); while if morphine is administered after the pain has commenced, the subjects report that the pain continues (and continues to be pain), though they no longer mind it. Our model suggests that morphine and other analgesics must work on the old low path while leaving the new high path relatively in order, and such is the case. (ibid p. 432)

So again Dennett’s model is shown to capture the relevant facts about pain phenomena. He goes on to discuss how his model can successfully handle facts about analgesia before returning to his original question of whether the theory he has sketched is really a theory of pain at all.

His model of pain is represented in a flow chart, and he asks whether we can locate pain, as opposed to its typical causes and effects, in this flow chart. He admits that the flow chart does indeed leave something out: the actual experience of pain. The flow chart is pragmatically useful and can explain things like hypnotic suggestion, the effects of drugs, and the effects of meditation; however, it leaves the philosophical questions untouched. Because it does not deal with the feel of pain, he argues, explanations at the sub-personal level will leave out the personal-level explanation of pain. However, this is not necessarily a bad thing, because the personal-level concept of pain is incoherent.

To show this he discusses the case of pain asymbolia, a brain disorder which results in people being in pain but not minding it. Such people will laugh if pricked violently with a needle. They admit that they feel the pain; they just do not mind having it. The sub-personal level of explanation can account for this phenomenon by pointing to brain lesions which damage people’s ability to mount an appropriate response to pain stimuli even though they still feel the pain. However, at the personal level it seems impossible that a person could feel pain but not mind feeling it. Dennett claims that this is because the personal-level explanation of pain is incoherent and inadequate.

He further bolsters his views on the incoherence of the personal-level concept of pain by discussing a variety of other curious phenomena:

The ordinary use of the word ‘pain’ exhibits incoherencies great and small. A textbook announces that nitrous oxide renders one ‘insensible to pain’, a perfectly ordinary turn of phrase which elicits no ‘deviancy’ startle in the acutest ear, but it suggests that nitrous oxide doesn’t prevent the occurrence of pain at all, but merely makes one insensible to it when it does occur (as one can be rendered insensible to the occurrence of flashing lights by a good blindfold). Yet the same book classifies nitrous oxide among analgesics, that is, preventers of pain (one might say painkillers) and we do not bat an eye. (ibid, p. 443)

Dennett notes that the ordinary language concept of pain is full of inconsistencies like the above one. He goes on to claim:

The philosophical questions that an identity theory (or other ‘philosophical’ theory of pain) would be designed to answer are generated by our desire to put together an account that consistently honours all, or at any rate most of our intuitions about what pain is. A prospect that cannot be discounted is that these intuitions do not make a consistent set. (ibid. p. 445)

Again Dennett is arguing that there may be no explanation of the personal concept of pain because such an inconsistent set cannot exist outside the vague space of beliefs. In ordinary language people sometimes speak as though pains exist only in the mind of the beholder, yet at other times ordinary language implies that we can have unperceived pains. Again he asks how such a contradictory object could exist. As a result of these contradictions Dennett argues that the ordinary concept of pain is impeached, and that what we require is a better concept. Ultimately he recommends that we do away with the ordinary concept of pain altogether:

I recommend giving up incorrigibility with regard to pain altogether, in fact giving up all ‘essential’ features of pain, and letting pain states be whatever natural kind states the brain scientists find (if they do find any) that normally produce all the normal effects. When that day comes, we will be able to say whether masochists enjoy pain, whether general anesthetics prevent pain or have some other equally acceptable effect, whether there are unfelt pains, and so forth. These will be discoveries based on a somewhat arbitrary decision about what pain is, and calling something pain doesn’t make it pain. (ibid, p. 449).

This discussion of Dennett’s amounts to an admission that the ordinary language concept of pain will not be explained by our scientific theory. However, this is nothing to worry about, because the ordinary language concept does not really pick out anything; it is just a bad theory. He further concludes that while we will not be able to build a robot which feels pain in the incoherent ordinary language sense, we will still be able to build a robot that feels pain in the scientific sense of implementing a theory of pain.

So we have seen that in Dennett’s two main discussions of pain he denies that the ordinary concept of pain refers to a real entity in the external world. In his earlier discussion he argues that pains cannot be identical with brain states without breaking Leibniz’s law. Since he could find no good reason to break Leibniz’s law, and since he argued that dualism was untenable, he concluded that a particular brain state cannot be identical with a particular pain. He avoided the claim that pain talk actually refers to brain states by analysing pain language as non-referential, in the same way that words like ‘sakes’ and ‘dints’ are non-referential. He argued that any scientific explanation of pain behaviour and verbal reports will not account for the ineffable qualia of pain. However, he does not think this is a problem, because he has already shown that the ordinary language concept of pain is merely a way of talking; it does not refer to any entity.

Ten years later, in his “Why You Can’t Make a Computer That Feels Pain”, Dennett expanded on the above points. He further analysed the ordinary language concept of pain and found it to be contradictory. This contradictory object, he argued, could not exist, so our scientific concept of pain obviously does not need to be consistent with a non-existent object. Such an eliminativist view of pain flies in the face of common sense: surely if we can know anything, we can know we are in pain, and it seems absurd to deny something as fundamental as the existence of pain. In one sense, he argues, pains are real; they are as real as dollars, in the sense that they are intentional objects. We can have a scientific theory of dollars, but it will not answer questions like ‘How much does that cost in real money?’. The same is true of pain. People have beliefs and intuitions about the nature of pain; however, a scientific theory cannot always capture all of people’s intuitions. Since Dennett thinks he has shown that the ordinary language talk of pain is vague and contradictory, he thinks that a scientific theory can disregard this talk. This means, of course, that our theory is no longer a theory of pain.

Obviously, Dennett’s views on pain have serious implications for our theory that Dennett is guilty of the typical mind fallacy. In the case of mental imagery we surmised that Dennett, as a non-imager, assumed that we were all non-imagers and unjustifiably built his theory on the assumption that we are all like him. One strand of evidence we used in support of this position was Dennett’s comparison of our ability to call up mental imagery with Shakey the robot’s. Shakey did not, of course, actually experience mental imagery; it merely reported having such experiences. Dennett argued that we are like Shakey in the sense that, for us too, it is a case of all tell and no show. We argued that Dennett was right about his kind of mind, but wrong that all minds are like his.

By parity of reasoning we should treat pain in the same way as mental imagery. So the reader may expect us to argue that Dennett has no pain experience and that this is why he is an eliminativist about pain. The reader may then expect us to argue that Dennett is guilty of the typical mind fallacy of incorrectly assuming that all people’s pain experiences are like his. In a sense we do argue in this manner. However, before outlining our argument, we will consider an obvious objection to our position.

It is plausible to claim that people have radically different experiences of mental imagery, from non-imagers to eidetic imagers, while showing few overt behavioural differences. However, it is completely implausible to assume that people who have no pain experience, or radically powerful pain experiences, will give off few behavioural signals. In fact we know from empirical studies that people who do not experience pain are easily discovered, and are discovered at a very early age. Such people suffer from what is called congenital analgesia, and they typically do not live very long. Here is a famous description of a person suffering from the condition:

As a child, she had bitten off the tip of her tongue while chewing food, and has suffered third degree burns after kneeling on a hot radiator to look out the window. When examined… she reported that she did not feel pain when noxious stimuli were presented. She felt no pain when parts of her body were subjected to strong electric shock, to hot water at temperatures that usually produce reports of burning pain or to prolonged ice bath…A variety of other stimuli, such as inserting a stick up through the nostrils, pinching tendons, or injections of histamine under the skin-which are normally considered forms of torture-also failed to produce pain.

       Miss C. had severe medical problems. She exhibited pathological changes in her knees, hip and spine, and underwent several orthopaedic operations. Her surgeon attributed these changes to the lack of protection to joints usually given by pain sensation. She apparently failed to shift her weight when standing, to turn over in her sleep, or to avoid certain postures, which normally prevent the inflammation of joints.

All of us quite frequently stumble, fall or wrench a muscle during ordinary activity. After these trivial injuries, we limp a little or we protect the joint so that it remains unstressed during the recovery process. This resting of the damaged area is an essential part of recovery. But those who feel no pain go on using the joint, adding insult to injury. (Melzack and Wall 1988, pp. 4-5)[10]

We inserted this long quote because it illustrates the dramatic nature of congenital analgesia, and shows that Dennett is obviously not suffering from this illness. We are not claiming that Dennett does not experience pain; rather, we are claiming that his pain experience differs from that of the ordinary person. We argue that Dennett, being primarily a linguistic and physicalistic thinker, experiences pain in a different manner from other people. Furthermore, the nature of his experiences results in him drawing different philosophical conclusions from those with other types of minds.

In Berman’s 2008 “Philosophical Counselling for Philosophers” we saw how, in the case of Berkeley and Locke’s debate about abstract ideas, the two philosophers’ psychological abilities resulted in them holding different philosophical views. More recently, in his 2013 paper “Do Personality Traits Mean Philosophy is Intrinsically Subjective”, Geoffrey Holtzman has demonstrated that philosophers’ personality traits correlate strongly with how plausible they find various famous philosophical thought experiments. These studies both show that philosophers’ psychological traits can play a big role in determining their philosophical views. Obviously, though, the fact that some psychological variation accounts for some philosophical differences between philosophers tells us nothing about whether this is the case with Dennett and pain.

To establish whether Dennett’s pain experience is different, we first need to establish that such variation exists, and then deal with the problem of whether Dennett’s own pain experience is indeed different from others’. The case of congenital analgesia obviously differs from normal pain experience; likewise the pain asymbolia discussed above differs from normal pain experience. However, these are severe disorders which are easily detectable by behavioural tests, and Dennett obviously does not suffer from disorders of this kind. Nonetheless, there are empirical studies which show that there are variations in the way people experience their own pains. In their 2011 paper “Genetic Basis of Pain Variability: Recent Advances”, Young et al. showed that people display variability in the intensity of the pain they feel. This variability was demonstrated to be related to race; for example, African-American and other non-Caucasian people reported greater pain than Caucasians in the same clinical settings (“Genetic Basis of Pain Variability”: p. 1). It was also shown that females reported greater pain than males in the same clinical settings (ibid: p. 1). These were all variations in the intensity of pain, and Young et al. trace them to genetic causes. In his 2005 paper “Sex and Gender Issues in Pain”, Roger Fillingim has likewise shown that the sexes vary in the intensity of pain they report in the same clinical settings; Fillingim focuses on social determinants of such pain behaviour, as opposed to the genetic focus of Young et al. Another interesting study of note is Thomas Gosden’s 2008 psychology PhD thesis “Images of Pain: Exploration of the Characteristics and Functions of Pain Related Mental Imagery in Chronic Pain”. In this thesis Gosden demonstrates that people with chronic pain have more severe pain when it is associated with involuntary mental imagery.

None of the studies we have mentioned directly supports our claim that Dennett’s views are derived from his own pain experience. However, the studies are very suggestive. They show that philosophers’ individual psychological abilities and personalities can affect their philosophical views, and that there is ample empirical evidence of wide variation in people’s subjective experience of pain. Given Dennett’s strange eliminative views on pain, the studies we have gathered together show that it is worth investigating whether his views are indeed derived from his individual experiences. To examine this more fully we will explore Dennett’s phenomenology of pain as revealed by his theoretical writings on the nature of pain.

There are four key points which indicate that Dennett has a distinct experience of pain:

(1)   Dennett is a physicalistic-linguistic thinker. Hence, like most physicalistic thinkers, he is a monist[11]. He explains away dreams, pains, and mental imagery in linguistic terms, showing that he is predominantly a linguistic thinker. He is incapable of even imagining pain outside of language.

(2)   When discussing pain in Content and Consciousness he focuses on the fact that the word ‘pain’ is non-referential. But whether the word ‘pain’ is referential or not is irrelevant to whether pain itself is identical with a brain state. The fact that Dennett cannot conceive of pain itself, and focuses instead on the word ‘pain’, shows that he conceives of pain in linguistic terms.

(3)   In his “Why You Can’t Make a Computer That Feels Pain” he again focuses on how the ordinary language word ‘pain’ has contradictory properties and is therefore theoretically useless. He never considers whether it is language itself which is defective, as opposed to pain itself, which exists independently of our linguistic representation. Again, this is because he cannot conceive of experience outside of language.

(4)   We are not arguing that Dennett has no experience of pain, merely that he has no non-linguistic experience of pain. He cannot see beyond language. All of his consciousness is overwritten by his linguistic and physicalistic nature.

Our analysis of Dennett’s views on mental imagery, colour phi, after-images, and pain reveals that he has a distinctive type of mind. Like all Type 2 thinkers he has no experience of the Cartesian Theatre; this is evident from his discussions of mental imagery and ‘real seemings’. His discussion of pain reveals that he is also, to a lesser degree, a Type 3 thinker. Dennett has a bodily experience of pain; however, his experience of pain is deeply immersed in his linguistic competence. He is incapable of even conceiving of pain outside of language. This is evident from his theoretical views on pain, which centre entirely on the ‘concept’ of pain and ignore the actual experience of pain.

We will get a clearer view of Dennett’s experience of pain by contrasting his views with those of other thinkers. We analysed Thomas Nagel as a blend of a Type 1 and a Type 2 thinker and, as one would expect, his views on pain are entirely different from Dennett’s.

With Nagel and Dennett, what one gets, buried beneath the surface of their argumentation, is a clash of intuitions. Or, to put things more precisely, Nagel has stronger intuitions on certain topics than Dennett, and this influences how each reacts to philosophical arguments. Over the years, as Nagel and Dennett have sketched their alternative theories of the mind, the issue of intuition has come up over and over again. Here is Nagel in The View From Nowhere:

But philosophy is not like a particular language. Its sources are preverbal and often precultural, and one of its most difficult tasks is to express unformed but intuitively felt problems in language without losing them… ( The View From Nowhere p.11)

Dennett has often been perplexed by this attitude of Nagel’s, and has correctly pointed out that a lot of scientific progress has been made by not treating our intuitions as sacrosanct. Here is Dennett criticising Nagel’s appeals to intuition:

Nagel is the most eloquent contemporary defender of the mysteries and anyone who suspects I have underestimated the problems I pose for my theory will be braced by Nagel’s contrary assertions. Assertions, not arguments. Since Nagel and I start from different perspectives, his arguments beg the question against a position like mine:  what counts for him as flat obvious, and in need of no further support, often fails to impress me. I assume that whatever the true theory of mind turns out to be, it will overturn some of our prior convictions, so I am not cowed by having counterintuitive implications pointed out to me. No doubt, Nagel who calls his book ‘deliberately reactionary’, is equally unshaken when it is pointed out that his allegiance to certain intuitions is all that prevents him from escaping his perplexity down various promising avenues of scientific research ( The Intentional Stance p.6)

Dennett admits that he and Nagel are begging the question against each other; however, he argues that his approach has yielded more pragmatic success, so he will continue using it. Our contention is that both thinkers adopt the stance they do because of the nature of their minds.

This comes out when we see how Nagel views the phenomenon of pain. Nagel’s views on pain are diametrically opposed to Dennett’s, and he has very definite views on its nature. He claims that pain is something that is intrinsically bad, not just something we hate (Mind and Cosmos, p. 110). He further argues that an instrumentalist account of pain does not capture the actual badness of pain. An instrumentalist account of pain is a Darwinian-type explanation: it could be argued, for example, that the feeling of pain warns people that something is wrong and that action needs to be taken, so people who are incapable of feeling pain would be at a distinct selective disadvantage. Nagel does not doubt that an explanation of this kind is possible. However, he argues that this Darwinian explanation does not explain the objective judgement that pain is intrinsically bad. For Nagel, pain has a dual existence: (1) a feeling for which we can give an evolutionary explanation, and (2) an object which we can reflect on and judge to be intrinsically bad. He even claims that his objective value judgements are so strongly felt that if Darwinian theory contradicts them he will reject the Darwinian theory (ibid p. 110).

Dennett of course takes the opposite view to Nagel: for him, our intuitions take a secondary role to scientific discoveries. Part of the reason Dennett adopts this attitude is that experimental research in physics has shown how incredibly unreliable our intuitions are; our intuitive conception that the world works according to contact mechanics was refuted by Newton three centuries ago. So Dennett thinks we should be open to similar things happening in the philosophy of mind. We argue that Dennett, though partly motivated by reasons, is also influenced by the nature of his own experiences.

Dennett and Nagel clearly have entirely different ideas of the nature of pain. Nagel thinks that pain is something we feel which goes beyond anything we can express in language (The View From Nowhere, p. 11), while Dennett views pain as something immersed in language; he thinks of us as Joycean machines, creatures who are overridden by language (Consciousness Explained, p. 275). We argue that their different conceptions of pain are based on their own inner experiences. Their different theoretical descriptions of the nature of pain clearly show how differently they experience pain.


When discussing Dennett’s views on mental imagery, colour phi, and after-images we came to the conclusion that he was denying the existence of phenomenal consciousness. We argued that, on Dennett’s view, we are told by our brain that we have had such and such an experience, and it is this report that grounds our belief, not a direct experience of phenomenal consciousness. However, Dennett’s discussion of the phenomenon of change blindness complicates our analysis of his views on the nature of consciousness. Before going into how his views on change blindness complicate our interpretation of his theory, we will first outline what the phenomenon of change blindness is and how Dennett interprets it.

Change blindness is a well-known phenomenon in psychology. A famous example of change blindness was studied by Rensink, O’Regan and Clark (1997). They presented subjects with near-identical photographs for 250 milliseconds each, with a blank screen presented between each photograph for 290 milliseconds (Dennett: Sweet Dreams, p. 83). The near-identical photographs are flashed back and forth for subjects to watch. Subjects typically do not register any change in the photographs for at least 30 seconds. Once they register the change (a white door turning brown at the corner of the picture), or are told about it, the change becomes obvious to them. Furthermore, prior to registering the change on a conscious level, their bodies seem to register the changes as they occur. Thus Dennett notes:

In the phenomenon of change blindness for colour changes, for instance, we know that the colour sensitive cones in the relevant region of your retina were flashing back and forth, in perfect synchrony with the white/brown quadrangle. (ibid, p. 88)
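The trial structure of the flicker paradigm just described can be specified precisely. The sketch below is our own illustrative reconstruction: the 250 ms and 290 ms figures are the ones Dennett reports, but the function and its names are assumptions for exposition, not Rensink et al.’s actual experimental code.

```python
def flicker_schedule(duration_ms: int, image_ms: int = 250, blank_ms: int = 290):
    """Build the stimulus sequence for one flicker-paradigm trial:
    photograph A, blank, photograph B, blank, repeated until duration_ms
    has elapsed. Returns (frame, onset_ms, duration_ms) tuples.
    Illustrative sketch only; names and structure are our assumptions."""
    schedule, t, i = [], 0, 0
    frames = ["A", "blank", "B", "blank"]
    while t < duration_ms:
        frame = frames[i % 4]
        dt = image_ms if frame in ("A", "B") else blank_ms
        schedule.append((frame, t, dt))
        t += dt
        i += 1
    return schedule

# In a typical 30-second trial the pair of photographs alternates dozens
# of times before subjects consciously notice the change:
sched = flicker_schedule(30_000)
alternations = sum(1 for frame, _, _ in sched if frame in ("A", "B"))
```

On these timings a full A-blank-B-blank cycle lasts 1080 ms, so the retinal cones Dennett mentions are driven back and forth dozens of times before the change is consciously registered.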

He argues that we need to do further tests to understand which areas of the brain light up after people become aware of the changes in what they are viewing. Dennett argues that the phenomenon of change blindness poses serious difficulties for the notion of qualia. To illustrate this point he typically gives the change blindness test to people at lectures. After people figure out that the change has occurred, he asks them the following question:

Now before you noticed the panel changing color, were your color qualia for that region changing? We know that the cones in your retinas in the regions where the light from the panel fell were responding differently every quarter of a second, and we can be sure that these differences in transducer output were creating differences farther up the pathways of colour vision in your cortex. But were your qualia changing back and forth white/brown/white/brown-in time with the colour changes on the screen? Since one of the defining properties of qualia is their subjectivity, their “first-person accessibility,” presumably nobody knows- or could know- the answer to this question better than you. So what is your answer? Were your qualia changing or not? (Sweet Dreams p. 83)

Dennett argues that whatever answer is given to this question will make trouble for the theoretical notion of qualia. The question we want to consider is whether Dennett’s views on change blindness affect our claim that he is denying the reality of phenomenal experience.

Consider Dennett’s solution to the colour phi phenomenon. The problem was that the colour change was not happening in the external world; likewise, he denied that there was a mental presentation of a red spot changing into a green spot in a Cartesian Theatre. So we had the problem of saying what was occurring in the colour phi phenomenon. Dennett argued that our brain simply tells us that a red spot turned into a green one: the colour phi phenomenon is not presented to the mind’s eye; rather, our brain just tells us that we saw the change. Dennett’s solution has led people to accuse him of explaining only access consciousness while ignoring phenomenal consciousness.

His position on change blindness helps us better understand why he seems to ignore phenomenal consciousness. For some people the idea of phenomenal consciousness is parasitic on the notion of qualia: when people claim that phenomenal consciousness is a basic fact of our experience, they are typically referring to our subjective experience of qualia. Qualia are the basic feel of an experience: the taste of beer or of coffee, the feel of touching a smooth or rough surface, etc. Qualia are first-person experiences. Philosophical lore has it that no amount of third-person scientific research can reveal what the intrinsic experience of qualia in fact is, because qualia are intrinsic first-person phenomena. Qualia, our intrinsic first-person experiences, just are our phenomenal consciousness. In the case of colour phi Dennett denied that anything is presented before the mind, while in the case of change blindness he went further and argued that the notion of qualia makes little sense. So it could be argued that his change blindness position is even more radical than his colour phi position.

However, it still seems a bit of a stretch to read Dennett’s claims about change blindness as an argument against the notion of phenomenal consciousness. Dennett does, after all, note that after about 30 seconds people come to recognise the changes between the two pictures. To some, this indicates that, for Dennett, when people become aware of the changes in the pictures the change enters phenomenal consciousness. To think this is to radically misunderstand Dennett’s views on consciousness. Dennett argues that there are three different ways to answer his question about whether our colour qualia were changing prior to our noticing the changing pictures: (1) yes, (2) no, (3) I do not know. If someone answers yes, this falsifies one of the key criteria of qualia, namely that qualia are experienced directly in a first-person manner: a person who answers yes is inferring the nature of their qualia from a third-person point of view. If a person answers no, he argues, they risk trivialising the notion of qualia: qualia get reduced to our judgements that they occurred. This means that qualia can no longer be considered to have an intrinsic nature; their nature will be relational, in that they rely on the judgement of a subject in order to exist. Furthermore, since zombies are by definition behaviourally indistinguishable from ordinary people, a person who answers no will be committed to the view that a zombie has qualia; after all, the zombie will make the same qualia judgements as everyone else.

If people answer that they do not know whether the qualia were changing back and forth, there can be two reasons for their confusion: (1) despite what they originally thought, they do not really know what qualia are, or (2) they do not know the answer because qualia have properties which go beyond the reach of both third-person and first-person science. Dennett argues that a person who makes Claim 2 is placing themselves outside science, so their views are not really candidates for serious consideration. So he basically argues that Claim 1 is the correct conclusion to draw from the fact of change blindness.

So Dennett’s change blindness argument purports to show that the philosophical notion of qualia is a mess. He further supports this claim with his thought experiment “The Strange Case of Mrs Clapgras”. In his earlier paper “Quining Qualia” Dennett presented a series of thought experiments designed to show that the philosophical notion of qualia is incoherent. The combined force of his various arguments is designed to make people less comfortable with using the term ‘qualia’, to make them realise that, despite appearances, the term is senseless.

If we take Dennett’s arguments against qualia seriously, this undercuts one of the primary objections to his views on the colour phi phenomenon. The objection was that his account is phenomenologically inaccurate: we know from experience, it is argued, that we observe the colour change from red to green and then form our judgements based on our observation of this change; we do not form the judgement independently of the observation. This argument is designed to show that, from the point of view of first-person experience, we know that Dennett’s conclusions on colour phi are false; we directly experience the phenomenon, despite what he thinks. However, if one takes his arguments against the notion of qualia seriously, one is given pause to doubt whether the argument from first-person experience is sufficient to refute Dennett’s claim.

A person who used first-person experience as an objection to Dennett’s views on colour phi, mental imagery, etc. would be arguing that, despite the theoretical problems with these phenomena (no evidence for the existence of figment, no evidence for the existence of a mind’s eye), our first-person experiences show that the phenomena exist. They would argue that our phenomenal experience grounds our judgement; it is not the case that our brain merely tells us that we experience mental imagery; we actually experience the qualia of an image being presented before the mind’s eye. Dennett’s discussion of change blindness (and his other thought experiments) shows that this argument against his view of the mind does not work.

Qualia are often presented as the most basic things we can know: we can doubt all else, but we cannot doubt how our most basic experiences seem to us. What Dennett’s thought experiments purported to show is that, despite what we may believe, we do not really have any firm handle on what these qualia are. When we examine qualia closely we do not in fact have a clear grasp of their nature. So, for example, the card trick showed that while we may think we are experiencing definite qualia in our peripheral vision, we actually are not. In the case of change blindness we have no clue as to whether qualia were being flashed back and forth prior to our noticing. So Dennett’s thought experiments, taken together, do seem to show that the philosophical notion of qualia is a mess. This has serious implications for those who argue against Dennett’s explanation of the colour phi phenomenon by appealing to facts about how things seem to them. Since Dennett has cast serious doubt on the reliability of our claims about how things seem to us, appealing to such seemings as an incorrigible foundation from which to refute his views on mental imagery and colour phi is questionable, to say the least.

Critics of our position could argue that Dennett’s arguments against qualia undercut our claim that he is guilty of the typical mind fallacy. A central aspect of our argument centred on people’s introspective reports about what they experience, while Dennett’s various arguments show that people are not authoritative about what they experience and that they do not really know what they mean when they use the term ‘qualia’. We argue, on the contrary, that Dennett’s arguments against qualia have no effect on our central claim.

If we argued entirely from people’s introspective reports and nothing else, then Dennett’s arguments about qualia would undercut our claims. However, we provide much more evidence than this. Firstly, we rely on neurological reports to support our claims. We have discussed evidence from Kosslyn, Ganis et al. which shows that imagers have the occipital lobe activated to a greater degree than non-imagers. So our introspective reports about variations in people’s abilities to form mental imagery are backed up by impressive neurological evidence, and by the impressive behavioural evidence summarised earlier in the paper. A critic could reply that, on standard definitions of qualia, we cannot appeal to neurological evidence to support our claims. This criticism does not apply to us: we are not using the theory-laden notion of qualia; rather, we speak simply of experience. We see no valid reason why we should be barred from using neurological and behavioural evidence to support our claims. We agree with Dennett that fantasies about zombies which are identical to ordinary people but which lack consciousness are pointless. When we speak of people who have different mental abilities, we argue that these different abilities can be detected via behavioural tests, introspection, verbal reports, neuroscientific tests, etc. Using such diverse strands of evidence has led to discoveries of previously unknown subjective variation in mental imagery; it has led to the diagnosis of colour blindness, synaesthesia, etc.

Another type of evidence we rely on is behavioural evidence, more precisely verbal behaviour, in the form of texts that our subjects produce. People’s philosophical views on various topics often inadvertently reveal the nature of their type of mind. Bearing in mind, then, that our evidence is not limited to introspective reports, it should be obvious that Dennett’s evidence that introspection is sometimes unreliable and gives us incomplete knowledge of a topic does not refute our Typical Mind Fallacy argument.

Nonetheless it does raise a more fundamental question for our theory. We have been assuming that Dennett disagrees with us that people have variations in their imagery abilities. However, this need not be the case. Consistent with his own theory of consciousness, he could accept that people report variations in their ability to form mental images, and that people who claim to be non-imagers and people who claim to be imagers have different patterns of neural activation. All of this is consistent with his theory of consciousness; nothing in it forces him to deny any of these facts about imagery. What Dennett has to argue is that neither the imager nor the non-imager is actually presented with any imagery. Imagers will have some process in their brain which forces them to form the judgement that they experience mental imagery (in a similar way to how we form the judgement that we experience colour-phi), while non-imagers’ brains will not form judgements of experiencing imagery. Obviously, if Dennett wants to defend such a model he will need to provide neurological evidence to support it; however, he would argue that he is more likely to find evidence for his model in the brain than he is to find a Cartesian theatre there. The main point is that there is no problem in principle with his acknowledging that people have variations in their mental imagery.

The problem with Dennett’s model is that it does not do justice to the phenomenology of mental imagery. People (even poor imagers like me) experience mental imagery. It is more than being told that they experience such imagery; it is an actual experience. This reply, of course, brings us back to our original debate: can we be so sure that we have a direct experience of something? We would argue that we can. Our experiences are foundational, and nothing Dennett has said changes this fact. His arguments against qualia are arguments against a philosophical construction; they are not arguments against experience. Nobody who has direct experience of colour-phi, mental imagery and pain could deny that direct experience. However, for a thinker like Dennett, whose experience is primarily bodily and linguistic, his theory of consciousness will make perfect sense. In fact we would expect a person with his type of consciousness to endorse the kind of views that he does.



Dennett’s eliminativist views on consciousness have puzzled interpreters for years. Some find his arguments utterly compelling, while others believe they leave out key aspects of the mind. We have tried to make plausible the claim that such disagreements result from people having different types of mind and not factoring these differences into their theories. We suggest that philosophers can avoid such mistakes by engaging in introspective research, submitting themselves to psychological tests, etc. Understanding individual variation among minds, and how such variation influences theorising, can only help to make philosophy more objective than it has heretofore been. Ours is but a first step towards what we hope will be a more scientific way of practising philosophy.

[1] William James quotes taken from Berman’s Penult.

[2] A non-imager interviewed by Galton, the astronomer Major John Herschel, held a similar view to Dennett: “The questions presuppose assent to some kind of proposition regarding ‘the mind’s eye’ and the ‘images’ which it sees… It is only by a figure of speech that I can describe my recollection of a scene as a ‘mental image’.” (Galton 1907). Galton’s tests revealed that Herschel was in fact a non-imager. We make the same claim about Dennett.

[3] In his 1958 paper “Concerning Mental Pictures” Arthur Danto describes his own imaging abilities, whereby he can perform precisely the tasks which Dennett claims are impossible. This again indicates that Dennett is describing his own type of images rather than the images which other subjects have.

[4] See David Berman: A Manual of Experimental Philosophy (2009).

[5] See Bill Faw: Outlining a Brain Model of Mental Imaging Abilities (1997).

[6] See the Introduction of Berman’s Berkeley and Irish Philosophy, his A Manual of Experimental Philosophy, and his Philosophical Counselling for Philosophers. See also William James’s Principles of Psychology for historical evidence of the typical mind fallacy.

[7] The data on dream experiences and mental imagery are only suggestive. We are not claiming to have proved that people who are non-imagers are always vivid dreamers. Rigorous experimental tests will need to be done in order to establish this conclusively.

[8] See Kolers, P. A. and von Grünau, M. (1976) “Shape and Color in Apparent Motion”, Vision Research 16, pp. 329-335.

[9] I am here simplifying Dennett’s model; however, the simplification does not affect the argument of the paper.

[10] Quote taken from Nikola Grahek: Feeling Pain and Being in Pain, p. 8.

[11] The link between monism and physicalistic types is explored in more detail in Berman 2008, Penult.