1984: A POSTMODERN HORROR STORY.

In my undergraduate days I attended a class on Rorty’s (1989) ‘Contingency, Irony and Solidarity’[1]. The class centred on analysing a great text, and for one term we analysed CIS. Typically the class involved students attacking every aspect of CIS for its various incoherencies, with Professor Berman (no fan of Rorty) cast in the role of defender of Rorty’s position. By the end of the term most of us still didn’t buy Rorty’s radical pragmatism, but we did appreciate that his position was much more subtle and defensible than it first seemed.

In one particularly interesting class we discussed chapter 8 of CIS, where Rorty analysed George Orwell’s great novel 1984. One of the most terrifying moments in 1984 is the scene where O’Brien is torturing Winston. Rorty discussed this scene in detail and considered the question of whether we have any deep human nature within us that could be used to resist O’Brien. Rorty’s sobering conclusion was that we have no deep-rooted human nature that would help Winston resist O’Brien’s attempt to break him down and rebuild him in his own image:

“I take Orwell’s claim that there is no such thing as inner freedom, no such thing as an “autonomous individual”, to be the one made by historicist, including Marxist, critics of “liberal individualism.” This is that there is nothing deep inside each of us, no common human nature, no built-in human solidarity, to use as a moral reference point. There is nothing to people except what has been socialized into them – their ability to use language, and thereby to exchange beliefs and desires with other people.” (CIS p. 177)

“Orwell helps us to see that it just happened that rule in Europe passed into the hands of people who pitied the humiliated and dreamed of human equality, and that it may just happen that the world will wind up being ruled by people who entirely lack those sentiments. Socialization, to repeat, goes all the way down, and who gets to do the socializing is often a matter of who manages to kill who first” (Rorty CIS p. 185)

Rorty discusses how certain people, e.g. Howe (1984) and Trilling (1984), want to discount the second part of 1984 and celebrate the first part. Such people read Orwell as a kind of realist philosopher defending commonsense truths like 2+2=4, the existence of the past, moral realism, epistemological foundationalism etc. Rorty offers an entirely different reading of 1984 from that of Howe and Trilling. His reading can best be understood in the context of his views on the self, which he sketches in chapter 3 of CIS.

Philosophers have been debating the nature of the self for as long as the subject has existed. Much of the discussion has centred on dreams of human exceptionalism: dreams that who we are, what our essential nature is, will show that we are something more than mere organised matter created by the contingent forces of nature. Thus we have Plato arguing that our grasp of mathematical properties, which we do not experience in the empirical world, is evidence that we are special creatures who can apprehend timeless truths beyond the contingent, messy experiences of the real world. We have Descartes arguing that we are special because, unlike other animals, we have immaterial minds with a special metaphysical status, the mind being the one thing in the universe whose existence could not be subjected to methodic doubt. We have Kant’s claim that the categories of the mind structure our conception of the world, so that the mind in a sense partially creates the world, and that the human self has a fixed nature not changed by its interactions with the world. In more recent times philosophers like Husserl and Nagel, linguists like Chomsky, and neuroscientists like Eccles have tried desperately to find some mark of the mind that makes it special and not subject to contingency.

Attempts to make the human special should not be confused with attempts to study empirical differences between humans and other animals. We can note that humans can do some things (use language, build computers etc.) that other creatures cannot, just as other creatures can do things that we cannot: see by sonar (unaided by technology), fly (unaided by technology) etc. Facts like these show that humans have found ways of coping with their environment that differ from how other creatures cope with theirs. They do not show that we are somehow special and the pinnacle of nature.

Now in a blog-post of this size I obviously cannot go into a detailed assessment of every philosopher from Plato to Nagel and disprove every argument they have presented. Rorty, as a pragmatist, emphasises that holding on to descriptions of reality that give humans an essential nature untouched by the contingencies of experience becomes more and more difficult as we learn more about the universe we live in. Our studies in neuroscience, chemistry and biophysics show that human cognitive and bodily processes are created by interactions of chemical properties, which can in turn be explained as resulting from the basic nature of physical forces and fields operating at a lower level[2]. Our studies of natural history have shown that humans are just another hominid, evolved from a common ancestor we share with chimpanzees. We have fossil evidence to help us study the natural history of our ancestors as they evolved over the eons; we also have genetic evidence, and evidence from radioactive dating which we use to determine the age of the fossils we are studying. The evidence is overwhelming that we are animals shaped by our chemical/physical constitution being selected by the environment over millions of years. The upshot is that our species was “designed” by its environment over millions of years, and our structure is the result of countless accidents: had there been no dinosaur extinction, there might have been no evolution of humans, or of any other creatures with intelligence like ours, on this planet. Importantly, from Rorty’s point of view, our knowledge claims form an interconnected web of mutually supporting beliefs. If you want to deny that evolution took place, you are not just denying biology: you must also deny the physics we use to date the fossils, the evidence from plate tectonics, from genetic analysis etc.
Because of the interconnected nature of our scientific theories, rejecting one strand may force you to reject others. And if you are driven to deny basic chemistry and physics in order to disprove evolutionary theory, you will be hard pressed to explain anything.

Two immediate objections arise to what I am saying. Firstly, it could be argued that my explication of Rorty is very misleading. Rorty viewed science as one vocabulary among others which we can use to cope with the flux of experience, but I seem to be describing him as a proponent of scientism. To be clear, I do not think that Rorty endorsed scientism. However, I do think that Rorty’s emphasis on the holistic nature of our total theory of the world, together with his pragmatism, committed him to taking science more seriously than he did at times. From a pragmatist point of view, given the incredible predictive control that science gives us over the world and the interconnected nature of our scientific world picture, taking science seriously is imperative if we want better prediction and control of the world, something that matters for things like medicine, disease prevention etc.

Rorty doesn’t deny that science is a really useful tool, one that should be cultivated and promoted as much as possible. In an interview with Daniel Dennett discussing the nature of science, Rorty argued that if science gives us the tools to cure diseases and build aeroplanes, that is great; but if political philosophers like Locke and Mill help us create social systems that decrease suffering and war, then that is a great thing also. I think Rorty has a point: philosophers, historians etc. have really helped us understand our world and decrease suffering, and they sometimes use different tools, e.g. thought-experiments, conceptual analysis, logical arguments. So there is room, as Rorty argues, for different tools, and some tools are better than others depending on the job. That said, integrating scientific methods into disciplines like philosophy and history is really improving the accuracy of those disciplines: thus we have Bayesian methods used in history, experimental methods used in philosophy etc. More and more disciplines are operating in an opportunistic, pragmatic manner, adopting tools from other disciplines. And these different disciplines have, in their own ways, arrived at the conclusion that humans are biological organisms whose structure has been determined by time and chance.

Another objection that could be made is that humans might have been created by time and chance when viewed from the biological perspective, but at the level of physics things are causally determined. Thus the initial conditions at the big bang determined that life would unfold as it did, and given those exact initial conditions life would have unfolded exactly as it did; the element of chance is removed. An argument like this, though, is extremely weak. Firstly, the notion of causality is not necessarily one that is even used in physics (see Russell 2013, and Ladyman and Ross 2013), so a theorist who adopts this approach is going to have to explain what they mean by physics causally determining that humans must have evolved in the way they did. Secondly, given discoveries in quantum physics, there is no reason to think that the universe at the fundamental level is deterministic. If a theorist wants to claim that the universe causally determined that humans must have existed, they will need to engage with contemporary physics to make their point seem plausible. Furthermore, even if the universe were a deterministic system, explaining how the universe itself came into existence and how this process determined that humans must have existed is a hopeless project.

Another version of this argument could be that, while physics may not have causally determined that humans had to exist, there are fundamental constraints on what type of organism evolution could have built. Thus, for example, Fodor and Pylyshyn (2015) note that some features of our perceptual abilities are determined by principles of optics and aren’t plausibly explained by natural selection. Furthermore, Fodor and Piattelli-Palmarini (2013) have presented evidence of constraints (mathematical and biophysical laws) on the type of structures that evolution can build. But none of this shows that humans must have been built by evolution; a cursory glance at our planet and fossil record shows that diversity is the rule. Humans, it seems, were created as a result of time and chance, as Rorty correctly notes.

But Rorty makes a further, much more controversial claim: that humans are all nurture, linguistic through and through. Rorty has argued that there is nothing more to humans than what has been socialised into them, that there is no human nature which goes beyond our linguistic socialisation. On the face of it, this seems to be the exact position Steven Pinker was warning against in ‘The Blank Slate: The Modern Denial of Human Nature’ (2002). In that book Pinker provided a battery of empirical evidence that IQ is largely hereditary, with identical twins showing similar patterns of behaviour despite being brought up in different environments. Other theorists like Bloom, Spelke and Carey have provided some reasonably compelling evidence that people have innate concepts of objects, agents and causation. A lot of the evidence on innate architecture is controversial and subject to current empirical debate. But there is little doubt that humans have social instincts, and innate competencies that make it possible for them to learn from their environment. Without this much architecture (and much more), any social learning would be impossible. It is frankly impossible to take Rorty’s claim that humans are socialisation all the way down at face value. It is impossible for a creature to learn from its environment without implementing some kind of learning programme to parse the data; no model of knowledge acquisition is possible without postulating some kind of innate architecture. As Chomsky correctly notes, a dog or a telephone is exposed to as much linguistic data as a child is. We need something to explain why children learn to speak and dogs and telephones do not. Innate architecture is necessary in this kind of explanation: not necessarily an innate domain-specific language faculty, but at least some ability to parse linguistic data from the environment that other creatures lack. On pain of radical incoherence Rorty cannot deny this truism.
In interpreting him I will adopt Davidson’s principle of charity: the more absurd the theory you attribute to a thinker, the less likely it is that you have interpreted him correctly. Rorty long admired the work of Daniel Dennett, claiming that Dennett did for consciousness what Wittgenstein did for language and Ryle did for mind: naturalised it. Dennett of course made ample use of innate architecture in explaining the structure of our minds, but he also argued that while we need this architecture to learn a language, once we have learned a language it radically changes the type of creatures we are. I will assume that Rorty means something similar: that once we acquire a language and are socialised into a culture, who we are is radically determined by the social environment we are socialised in. I am not sure I am correct to interpret Rorty in this way, but it seems almost inconceivable that he could really think we are blank slates.

But even this modified version of Rorty’s claim is radically at odds with Pinker’s view of the mind. Interestingly, Pinker also discussed Orwell in ‘The Blank Slate’. Pinker noted that the vision propounded by O’Brien was almost identical to the view of the world proposed by postmodernists, and by a lot of totalitarian regimes, like some Marxist societies. Since Pinker and Rorty both discuss the dialogue in detail, I will paste a section of it here:

“O’Brien silenced him by a movement of his hand. ‘We control matter because we control the mind. Reality is inside the skull. You will learn by degrees, Winston. There is nothing that we could not do. Invisibility, levitation—anything. I could float off this floor like a soap bubble if I wish to. I do not wish to, because the Party does not wish it. You must get rid of those nineteenth-century ideas about the laws of Nature. We make the laws of Nature.”

“But the world itself is only a speck of dust. And man is tiny—helpless! How long has he been in existence? For millions of years the earth was uninhabited.’ ‘Nonsense. The earth is as old as we are, no older. How could it be older? Nothing exists except through human consciousness.”

“But the rocks are full of the bones of extinct animals— mammoths and mastodons and enormous reptiles which lived here long before man was ever heard of.’ ‘Have you ever seen those bones, Winston? Of course not. Nineteenth-century biologists invented them. Before man there was nothing. After man, if he could come to an end, there would be nothing. Outside man there is nothing.”

“But the whole universe is outside us. Look at the stars! Some of them are a million light-years away. They are out of our reach forever.’ ‘What are the stars?’ said O’Brien indifferently. ‘They are bits of fire a few kilometres away. We could reach them if we wanted to. Or we could blot them out. The earth is the centre of the universe. The sun and the stars go round it.’ Winston made another convulsive movement. This time he did not say anything. O’Brien continued as though answering a spoken objection: ‘For certain purposes, of course, that is not true. When we navigate the ocean, or when we predict an eclipse, we often find it convenient to assume that the earth goes round the sun and that the stars are millions upon millions of kilometres away. But what of it? Do you suppose it is beyond us to produce a dual system of astronomy? The stars can be near or distant, according as we need them. Do you suppose our mathematicians are unequal to that? Have you forgotten doublethink?’ Winston shrank back upon the bed. Whatever he said, the swift answer crushed him like a bludgeon. And yet he knew, he KNEW, that he was in the right. The belief that nothing exists outside your own mind—surely there must be some way of demonstrating that it was false? Had it not been exposed long ago as a fallacy? There was even a name for it, which he had forgotten. A faint smile twitched the corners of O’Brien’s mouth as he looked down at him. ‘I told you, Winston,’ he said, ‘that metaphysics is not your strong point. The word you are trying to think of is solipsism. But you are mistaken. This is not solipsism. Collective solipsism, if you like. But that is a different thing: in fact, the opposite thing.” All this is a digression,’ he added in a different tone. 
‘The real power, the power we have to fight for night and day, is not power over things, but over men.’ He paused, and for a moment assumed again his air of a schoolmaster questioning a promising pupil: ‘How does one man assert his power over another, Winston?’ Winston thought. ‘By making him suffer,’ he said. ‘Exactly. By making him suffer. Obedience is not enough. Unless he is suffering, how can you be sure that he is obeying your will and not his own? Power is in inflicting pain and humiliation. Power is in tearing human minds to pieces and putting them together again in new shapes of your own choosing. Do you begin to see, then, what kind of world we are creating? It is the exact opposite of the stupid hedonistic Utopias that the old reformers imagined. A world of fear and treachery and torment, a world of trampling and being trampled upon, a world which will grow not less but MORE merciless as it refines itself. Progress in our world will be progress towards more pain. The old civilizations claimed that they were founded on love or justice. Ours is founded upon hatred. In our world there will be no emotions except fear, rage, triumph, and self-abasement. Everything else we shall destroy—everything. Already we are breaking down the habits of thought which have survived from before the Revolution. We have cut the links between child and parent, and between man and man, and between man and woman. No one dares trust a wife or a child or a friend.

‘We control life, Winston, at all its levels. You are imagining that there is something called human nature which will be outraged by what we do and will turn against us. But we create human nature. Men are infinitely malleable. Or perhaps you have returned to your old idea that the proletarians or the slaves will arise and overthrow us. Put it out of your mind. They are helpless, like the animals. Humanity is the Party. The others are outside—irrelevant.” (George Orwell 1984)

I have quoted this large section because it demonstrates a terrible possibility: that human society could permanently become a horrible dystopia run by psychopaths and thugs. Pinker warns of this possibility, and claims that without a notion of objective truth we are powerless to respond to people like O’Brien. But Pinker wildly misses the point of the novel: even if we do have a notion of objective truth independent of culture, this gives us no defence against O’Brien. O’Brien openly admits that for practical science we need to think 2+2=4, respect the law of non-contradiction etc. He has simply developed a gross intellectual justification for the torture of people who disagree with the Party’s world view. Rorty is surely correct when he notes of Orwell:

 “He convinced us that there was a perfectly good chance that the same developments which had made human equality technically possible might make endless slavery possible. He did so by convincing us that there was nothing in the nature of truth, or man, or history that was going to block that scenario, any more than it was going to underwrite the scenario which liberals had been using between the wars. He convinced us that all the intellectual and poetic gifts which had made Greek philosophy, modern science, and Romantic poetry possible might someday find employment in the Ministry of Truth.” (CIS p. 176)

In 1984 it is technology and science that are used to give the thugs control, and public intellectuals who justify the behaviour of those in power. Pinker’s idea that postmodern philosophy can be used to justify a cruel regime shouldn’t obscure the fact that it was objective science and technology that put the thugs in place in the first place.

In his recent book ‘The Better Angels of Our Nature’ Pinker argued that science and philosophy have helped us radically reduce the amount of violence in the world over the last two hundred thousand years. Pinker argues for a Hobbesian view of human nature (though unlike Hobbes he correctly notes that humans are an inherently social species). He put together an impressive amount of data to show that since the Enlightenment violence has gone down radically. So he would argue that he has an empirical justification for thinking that Enlightenment values make it less likely that people like O’Brien will gain control. The strength of this argument, though, rests on the strength of his data, and anthropologists like Cathryn Townsend and Helga Vierich have argued that Pinker’s data doesn’t stand up to critical scrutiny. But that is a subject for another blog. I will stop here.

[1] Henceforth ‘Contingency, Irony and Solidarity’ will be referred to as CIS.

[2] Because of time constraints I have had to abstract from such niceties as functionalism, levels of explanation, real patterns etc., but I have discussed these issues in detail in other blogs.

Quine and Chomsky: Dispositions or Rules?

My distinction between fitting and guiding is, you see, the obvious and flat-footed one. Fitting is a matter of true description; guiding is a matter of cause and effect. Behaviour fits a rule whenever it conforms to it; whenever the rule truly describes the behaviour. But the behaviour is not guided by the rule unless the behaver knows the rule and can state it. This behaver observes the rule. (Quine, ‘Methodological Reflections on Current Linguistic Theory’)

Quine begins his critique of Chomskian linguistics by distinguishing between two different types of rules: fitting rules and guiding rules. He claims that Chomsky relies on a third, intermediate type: an implicitly guiding rule. On Quine’s reading, Chomsky thinks that ordinary speakers of English are guided by rules even though these rules cannot be stated by the speakers. According to this Chomskian picture, we can have two extensionally equivalent grammars, each of which fits the behaviour of the child, neither of which explicitly guides the child, and only one of which is true of the child. Quine claims that if this intermediate way of following a rule is to be made sense of, we need some type of evidence which will help us decide which grammar the child is implicitly following. He claims that a person’s dispositions to behave in determinate ways, in determinate circumstances, are the way to make sense of which grammar the person is following. However, these dispositions must obviously go beyond well-formedness if we are to use them to explain the distinction, because extensionally equivalent grammars are behaviourally indistinguishable in the strings they license. He speculates that the relevant dispositions may be those to make certain transformations and not others, or certain inferences and not others. Quine further notes that he has no problem with dispositions, and points out that a body has a disposition to obey the law of falling bodies, while a child has a disposition to obey any and all extensionally equivalent grammars. However, he can make no sense of Chomsky’s intermediate notion of rule following. He alludes to the ironic fact that while Chomsky seems to have no difficulty with the obscure notion of implicit guidance by rules, he has serious difficulties with the humdrum notion of disposition.

Chomsky replied to this criticism by first questioning the analogy between a child following the rules of grammar and a body obeying the law of falling bodies. He argued that this is a singularly misleading analogy, because the rules of English grammar do not determine what a speaker will do in a given context in the way that the law of falling bodies determines that, if a person jumps from a building, he will hit the ground at a specified time. According to Chomsky, what the rules of English grammar tell us is that English speakers will understand and analyse certain sentences in certain ways, and not in other ways. In other words, the linguist in Chomsky’s sense is trying to discover regularities in a person’s linguistic competence; the linguist is not after a theory of performance. This reply of Chomsky’s has two strands: first, his distinction between competence and performance; second, his claim that we can have no scientific theory of performance. His argument that by using idealisations, such as abstracting from memory and performance limitations, we can gain some understanding of subjects like phonology has something to recommend it. Idealisation plays a vital role in any science, and the appropriateness of an idealisation is to be judged by the success of the science which uses it. Given that generative linguistics has had some success, we can tentatively accept his idealisation. However, Chomsky’s confident assertion that we cannot have a science of human performance/behaviour has little to recommend it. Quine’s claim that people are obeying extensionally equivalent rules if their performance conforms to those rules is a perfectly legitimate claim, and Chomsky has offered us no reason to think otherwise.

Chomsky adds that even if we leave aside the purported disanalogy between the law of falling bodies and linguistic competence, Quine’s formulation would still fail because he is guilty of treating physics and linguistics inconsistently. What Quine should have said if he wanted to remain consistent was:

English speakers obey any and all of the extensionally equivalent English grammars in just the sense in which bodies obey the law of falling bodies or the laws of some other system of physics that is extensionally equivalent (Chomsky: Reflections on Language p. 188)

Chomsky claims that when put in these terms, we see that linguistics is no more in need of a methodological cure than physics is. His argument here merely repeats the criticism he made against Quine’s indeterminacy of translation argument: he is claiming that Quine treats underdetermination as fatal in linguistics but as harmless in physics.

Quine’s criticism was that Chomsky was using a third type of rule which was obscure because it went beyond fitting the behaviour of the person, but did not consciously guide the behaviour of the person. Chomsky notes that people are rarely, if ever, guided by rules in the sense of being able to state the rule they are following. Furthermore, he argues that linguists can go well beyond rules that merely fit the behaviour of people. He says that if we adopt the same realistic attitude towards linguistics that we adopt towards physics, then we can say that people are obeying rules which are really encoded in their brains, but of which they are usually not consciously aware. So we can use various pieces of evidence to decide between extensionally equivalent grammars.

To make his criticism of Chomsky’s conception of rules explicit, Quine discussed a toy example of two extensionally equivalent grammars which a person could obey. He asks us to imagine a string abc which can be divided up in two ways: as (ab)(c) or as (a)(bc). He notes that from a behavioural point of view we could say that a person is following either grammar, and that if we ask the native which is the correct grammar he will not be able to tell us. In this situation, he claims, Chomsky’s view that the person can be said to be implicitly guided by the rule (ab)(c) as opposed to (a)(bc) is senseless.
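Quine’s toy example can be made concrete with a small sketch (the grammar labels and tree representation here are my own illustration, not anything in Quine or Chomsky): two grammars that generate exactly the same surface strings, and so equally “fit” the behaviour, while assigning those strings different constituent structures.

```python
# Two toy grammars that are extensionally equivalent: both generate
# exactly the string "abc", but they assign it different constituent
# structures, (ab)(c) versus (a)(bc). The labels X and Y are arbitrary.

def parse_g1():
    # Grammar 1: S -> X c,  X -> a b   (groups "ab" as a constituent)
    return ("S", ("X", "a", "b"), "c")

def parse_g2():
    # Grammar 2: S -> a Y,  Y -> b c   (groups "bc" as a constituent)
    return ("S", "a", ("Y", "b", "c"))

def yield_of(tree):
    """Flatten a parse tree to the surface string it generates."""
    if isinstance(tree, str):
        return tree
    _label, *children = tree
    return "".join(yield_of(c) for c in children)

# Identical surface strings: behaviourally, both grammars "fit".
assert yield_of(parse_g1()) == yield_of(parse_g2()) == "abc"
# Distinct structures: what "implicit guidance" would have to decide between.
assert parse_g1() != parse_g2()
```

Nothing in the generated strings alone favours one grammar over the other; that is exactly the gap Quine thinks only dispositions could fill, and Chomsky thinks further evidence can.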

Chomsky argues that there is no mystery in deciding whether the child is implicitly guided by rule system (a) as opposed to (b). In this situation, he argues, all we have is a problem which can be solved by the ordinary methods of the natural sciences. Quine had argued that a natural solution to deciding whether (ab)(c) or (a)(bc) was the rule which implicitly guided the people would be to ask them which rule they followed. He notes, of course, that the people will not be able to tell us. So he argues that since the natives cannot decide which rule they are implicitly following, and both rules are compatible with the dispositions of the ordinary language user, the notion of one of the rules implicitly guiding the child is senseless. Chomsky agrees that asking the native will not get us far in deciding between rule (a) and rule (b), and he suggests a different way of dealing with the problem. He cites various types of evidence which could theoretically bear on it. He discusses how the linguist could have evidence, derived from studying our own language as well as others, which suggests that intonation patterns are determined by constituent structure. Such evidence might bear on the choice between the two proposed grammars: we might discover that the rules needed for other cases give the correct intonation if we take the constituents to be those of (a) instead of (b). Whether this type of evidence actually occurs is irrelevant. Chomsky’s point is that Quine is incorrect to assume that no evidence on the topic is attainable beyond evidence gained by questioning the natives.

Chomsky cites a quote from Quine’s paper ‘‘Linguistics and Philosophy’’ where Quine claims that there is much innate apparatus still waiting to be discovered, apparatus that will tell us how the child gets over the great hump that lies beyond ostension. Quine further notes that if Chomsky’s anti-empiricism says nothing more than that conditioning is insufficient to explain language learning, then it is of a piece with his indeterminacy of translation argument. Chomsky replies that if Quine really believes that there is further innate apparatus waiting to be discovered by science, this casts doubt on his claims in both the ‘‘Methodological Reflections’’ paper and Word and Object. Chomsky asks us to consider again the string abc. Quine had claimed that we have no evidence which can help us determine whether a subject is implicitly following the rule (ab)(c) or (a)(bc). However, according to Chomsky, given that Quine has no problem with innate mechanisms of any sort, surely it is possible that we will discover an innate mechanism in our species which determines that we follow (ab)(c) as opposed to (a)(bc), or vice versa. So in this respect, he claims, Quine (1969) holds doctrines which run counter to those he accepts in (1960) and (1970).

Quine would deny this charge. What Quine has said is that he has no problem with innate apparatus of any sort, as long as such apparatus can be made sense of behaviourally. When discussing the notion of rule following, he is claiming that if two grammars both equally fit the behaviour of the subjects, and the subjects are not consciously guided by these rules, then to claim that a person is implicitly guided by one of the rules is senseless. The postulation of innate apparatus which ensures that the child follows the rule (ab)(c) as opposed to rule (a)(bc) is pointless unless we have some behavioural evidence to justify it. And the type of evidence which Chomsky claims could be useful to the linguist in deciding whether the subject is following rule (a) or rule (b), i.e. perceived structure in intonation, is behavioural evidence, so it does not bear on the type of considerations which Quine is concerned with.

A Chomskian could argue that if Quine would accept evidence such as structure in intonation as a way of distinguishing between extensionally equivalent grammars, then he would be committed to accepting the Chomskian intermediate notion of a rule which implicitly guides the subject. Quine would of course rightly deny this. He would claim that, if we assume for the sake of argument that Chomsky’s made-up example of structure in intonation patterns is correct, then it would follow that the two extensionally equivalent grammars do not fit the totality of the linguistic behaviour of the relevant subjects. He could argue that his flat-footed distinction between the two different types of rules can accommodate the type of example which Chomsky puts forth. Quine could further claim that Chomsky has not fully explained his third, intermediate notion of a rule.

This debate about the notion of a rule is a real point of contention between both thinkers. Chomsky has tried to explicate his intermediate notion of a rule as something in the brain which unconsciously guides the language user. This conception of a rule does not fit in neatly with Quine’s two conceptions of a rule; the question is whether Chomsky’s third notion captures something real which is missed by the two Quinean notions.

For simplicity’s sake, let us consider a particular rule of English, the subject-auxiliary rule, which states that when forming a question, one must move the main auxiliary to the front of the sentence. According to Chomsky, the subject-auxiliary rule is a rule of English (and Spanish) though not of all languages; so it is a rule which requires some experience in order for acquisition of it to take place. Children are only capable of learning this rule because they are genetically programmed to follow certain universal linguistic principles, such as the following one: all languages are structure-dependent.

On the Chomskian conception, the child is guided by certain rules which he does not know and cannot state (unless he has knowledge of linguistic theory). The question we now need to ask is why Quine would have difficulties with these rules, and whether such rules can be accommodated within his flat-footed conception of rules. Quine’s emphasis on reinforcement and induction means that he would more than likely feel that the postulation of such innate rules is unnecessary. Nonetheless, let us assume that Chomsky is correct that poverty of stimulus considerations dictate that rules such as structure dependence are known innately, and that parametric variations of these universal rules result in particular languages being spoken. The question which needs to be asked at this point is whether we can make sense of the notion of ‘rule’ which Chomsky is postulating here.

When Quine wrote his ‘‘MRCLT’’ in the early 1970s, Chomsky was operating with a rule-based conception of language. From about 1980 onwards, with the inception of his principles and parameters approach, Chomsky’s conception of language changed. Though the change from a rule-based approach to a principles and parameters approach was a significant empirical advance, it does not have any effect on the criticisms which Quine brings to bear in his ‘‘MRCLT’’. So let us consider a question such as the following one: ‘Is the man who is happy over thirty?’, which is derived from the statement ‘The man who is happy is over thirty’. When forming this question, Chomsky claims, the child begins with the statement and unconsciously applies the rule: move the main auxiliary to the front of the sentence. According to Chomsky, none of this is done consciously; rather, what happens is that a genetically programmed mechanism in the brain interprets the data of experience and constructs a model. The mechanism determines what the rules of the language are, using the universal principles of language, with the parameters set through experience. When these parameters are set, we can say that the child is unconsciously following various different rules of English, such as subject/auxiliary inversion.
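The contrast between a structure-dependent rule and a merely linear rule can be made vivid with a small sketch (the representation and function names are my own, not Chomsky's notation). A linear rule fronts the first auxiliary it finds; the structure-dependent rule fronts the main-clause auxiliary, and only the latter produces the grammatical question.

```python
# Chomsky's classic example: forming a question from
# 'The man who is happy is over thirty'.

def front_first_aux(words, aux="is"):
    """Linear rule: move the FIRST occurrence of the auxiliary to the front."""
    i = words.index(aux)
    return [words[i]] + words[:i] + words[i + 1:]

def front_main_aux(words, aux="is"):
    """Structure-dependent rule: move the MAIN-CLAUSE auxiliary to the front.
    Rather than parse the relative clause 'who is happy', this sketch simply
    hard-codes the fact that the main-clause auxiliary is the last 'is'."""
    i = len(words) - 1 - words[::-1].index(aux)
    return [words[i]] + words[:i] + words[i + 1:]

sentence = "the man who is happy is over thirty".split()
print(" ".join(front_first_aux(sentence)))  # is the man who happy is over thirty  (ungrammatical)
print(" ".join(front_main_aux(sentence)))   # is the man who is happy over thirty  (grammatical)
```

Children reliably avoid the linear rule despite rarely, if ever, hearing the data that would distinguish the two; this is the poverty of stimulus consideration in miniature.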

Suppose we assume (falsely) that Quine would accept Chomsky’s poverty of stimulus argument and agree that rules such as structure dependence are innate, and that these rules determine that ordinary humans will derive rules such as auxiliary inversion when placed in certain linguistic environments. The question we now need to answer is how Quine would characterise these facts. The key point here is that Quine would focus on performance facts; he would ask whether people are disposed to form questions in ways consistent with these rules. Answering these questions would involve studying various different corpora and seeing how people actually form questions when talking with others, or when writing various different texts. It would also involve constructing various different controlled experiments to see whether the manner in which questions are formed varies with the circumstances of speaking/writing. What Quine would object to is the reliance of Chomsky et al. on the intuitions of others as to what is or is not an acceptable way of forming questions. Quine would claim that the salient aspect of our studies should be the behaviour of the subjects, not the subjects’ intuitions about the acceptability or unacceptability of various constructions.

However, the next question to ask is what Quine’s response would be if the behaviour of the subjects and the acceptability tests lined up perfectly. Quine would then have the following facts to account for[1]: poverty of stimulus considerations, subjects’ intuitions of the acceptability or unacceptability of certain constructions, and subjects’ behaviour in certain determinate circumstances. In this circumstance, Quine would claim that the person’s behaviour fits with any and all extensionally equivalent grammars which capture the behaviour of the subject. He would say that we are justified in claiming that the person’s behaviour conforms to a particular rule system (and to other extensionally equivalent systems). He would have to object to any postulation of innate apparatus, because there is no justification for postulating one rule system over another. He would accept innate apparatus only if it were determined by behavioural facts; however, no behavioural facts will help us decide between attributing rule system (A) rather than rule system (B) if they are extensionally equivalent. So on this Quinean picture, the rules which we claim the behaviour of the child fits do so in the same way that the behaviour of physical objects fits certain rules of physics. Rules discovered in this behaviourist manner would easily conform to Quine’s flat-footed conception of rules fitting the behaviour of the subject. There would be no need for the Quinean to postulate an intermediate type of rule which implicitly guides the subject.

There is, however, a difficulty with this Quinean conception of the behaviour of the child fitting certain rules. The difficulty stems from normative issues. The Chomskian picture conforms to our pre-theoretical intuitions about language in one clear sense. It seems obvious that when I construct a new sentence, the sentence will be grammatical or ungrammatical according to the rules of the language. So, for example, most people would believe that if I construct a new sentence there will be a fact of the matter as to whether the sentence is grammatical or not. However, if we are deriving our rules of language by studying how people actually perform, then at best all we can speak of is the probability of an utterance with a certain syntactic structure occurring or not. We will have no warrant to claim that the sentence is grammatical or ungrammatical. From a behavioural point of view, we can say that the sentence is atypical but not that it is incorrect. This seems like a serious difficulty with Quine’s conception of the nature of rules. He could reply to this concern by claiming that we could test whether the sentence was grammatical by the lights of the linguistic community by asking its members. However, this reply does not solve our problem; rather, it reintroduces the problem in a different guise. If we ask people whether this or that construction is acceptable, we are testing their intuitions in the same manner that Chomsky recommends. If people’s intuitions all agree that a certain construction is ungrammatical, but performance data indicates that the construction is used and accepted in ordinary speech, then we have arrived at an impasse. Furthermore, if people’s intuitions of acceptability disagree with their actual performance, then Quine would argue that we would have to give preference to performance data over people’s intuitions of acceptability.
The problem with this is that if people’s intuitions cannot be used to help us decide between a correct or an incorrect linguistic utterance, then we seem to have no way of doing so. All we have are linguistic regularities, and constructions which are irregular from that point of view.

To clarify the above difficulty, let us consider a computer. From a performance point of view, the computer exhibits certain regularities. So, for example, if I press the caps lock key when typing in Microsoft Word, the words I type afterwards are capitalised. When I press the caps lock key again, the computer no longer types in capital letters. Now imagine that one day I press the caps lock key and the symbol ‘*’ appears on my screen, and that I continue pressing the key for a while and ‘*’ keeps appearing. You can be quite sure that if this happened, nobody would think to themselves ‘Strange, the probability of my letters being capitalised when I press the caps lock key has just been reduced’. It is a safe bet that anybody who noticed that pressing the caps lock key resulted in ‘*’ being typed would assume that the computer was broken. This assumption would be based on the fact that we know the computer was designed for a particular purpose which it is no longer achieving efficiently. So we would assume that some part of the computer was broken and set about getting it repaired.

Now in natural language such an approach is possible as well. Take, for example, people with severe schizophrenia, or with some form of aphasia. People with schizophrenia sometimes speak with what is known as word-salad. Such word-salad sentences sometimes exhibit syntactic, semantic and pragmatic deviance. People with some forms of aphasia are sometimes incapable of forming sentences into syntactic units. Analysing sentences from schizophrenics and aphasics is no trivial matter, and understanding the ways such sentences go wrong is a flourishing field of psycholinguistics and neurolinguistics. Any theory which claimed that the speech of aphasics and schizophrenics was merely statistically unlikely, rather than incorrect, seems seriously deficient. Furthermore, as opposed to the case of the computer, we cannot say here that such people are in error because they are behaving contrary to their designers’ intentions. Who designed English, such that we would be sinning against their intentions if we speak ungrammatically? Chomsky would answer that people speaking in such a deviant manner are breaking the implicit rules of universal grammar, and this explains our judgements that such people are speaking ungrammatically[2]. Quine, who rejects appeal to implicit rules governing which sentences we accept as grammatical, will need to tell a different story about how such sentences come to be viewed as deviant.

In the rough and ready world of ordinary discourse, Quine recognises that intentional idioms are indispensable. It is only when we are trying to limn the true and ultimate scheme of reality that such idioms have no place. It is also in the rough and ready world that people’s behaviour (including their verbal behaviour) is viewed as deviant. It is from this rough and ready vantage point that people are judged to be suffering from schizophrenia and aphasia[3]. Quine would have no problem with using this pragmatic idiom in daily discourse while using the more precise discourse of neurology and behavioural science when trying to limn the ultimate nature of reality. So in this sense the schizophrenic and aphasic objection does not in any way affect Quine’s argument.

To the objection that Quine’s flat-footed conception of rule following cannot handle normative notions like correct and incorrect grammar, the same reply as above will suffice. First, Quine can point out that in ordinary discourse, when applying the dramatic idiom of intentionality, we can judge that certain statements are unclear, badly structured, pragmatically deviant, etc. However, when we are limning the true and ultimate structure of reality, we can do no better than say that people are following whatever extensionally equivalent grammars can be constructed to systematise their utterances. To people such as Chomsky, who claim that people follow one true grammar as opposed to any and all extensionally equivalent grammars, Quine would reply that we have, in principle, no behavioural evidence which can decide between them. And without such evidence, all we are justified in positing is Quine’s flat-footed conception of rule following.

As we have already seen, Chomsky would greatly object to this characterisation because he does not think that syntactic rules can be studied in terms of performance. The important thing to note is that at this point we do not know whether a theory of syntax, semantics, etc. is possible in terms of performance. We do not know whether people’s grammatical intuitions match up with their actual linguistic behaviour, though the empirical data to date suggests that they do not. A further difficulty stems from the fact that Chomsky does not typically offer statistical analysis of people’s grammatical intuitions, and when studies have been done[4] the results have revealed much greater variety in people’s intuitions than Chomsky would admit. So, Chomsky’s confident assertions aside, we do not yet know whether it is possible to construct a theory of performance, and discovering whether this is possible will require empirical research, not rhetoric. This bears on the debate between Chomsky and Quine on the nature of rules. If a science of behaviour is tractable, then Quine’s conception of the behaviour of various subjects fitting syntactic rules will be the most accurate way of conceiving the facts. However, if Chomsky is correct that a science of human behaviour is impossible, then his conception of a third, intermediate type of rule will be the correct picture.

[1] In reality, Quine does not have this collection of facts to account for, because Chomsky’s poverty of stimulus argument is dubious, and performance data does not match up closely with competence data.

[2] Obviously, there is more to schizophrenic word-salad than ungrammatical sentences; however, I am just focusing on ungrammaticality because it is directly relevant to the debate between Chomsky and Quine on this particular point.

[3] The rough and ready vantage point includes the judgements of psychiatric workers, who help themselves to intentional idioms and physical idioms as is useful for their purposes.

[4] For some research on the statistical analysis of people’s grammatical intuitions, see Lappin and Clark, ‘Linguistic Nativism and the Poverty of the Stimulus’.

Stephen King: Possible Worlds and the Idealised I

“Perhaps no-one at the end of his life, if he gives the matter sober consideration and is at the same time frank, ever wishes to live it over again; he more readily chooses non-existence.” (Schopenhauer, ‘World as Will and Idea’, p. 204)

“This is the best of all possible worlds”. (Leibniz ‘Essays on the Goodness of God, the Freedom of Man and the Origin of Evil’)

Philosophers from Plato to the present day have constructed grand metaphysical narratives about the nature of reality and man’s place in this grand metaphysical scheme. As well as using rational arguments and empirical evidence to support their positions, philosophers have often used thought experiments to support their theories. Famous thought experiments like ‘The Allegory of the Cave’, ‘Twin Earth’, ‘The Zombie Hunch’ and the ‘Chinese Room’ are part of every undergraduate’s lexicon. In many ways good fiction can serve as an interesting thought experiment for philosophical consumption; thus Dostoevsky’s ‘Crime and Punishment’ can be read as a thought experiment on the nature of morality in a Godless universe. Tolstoy’s ‘War and Peace’ can be read as a large thought experiment on the nature of free will[1]. Voltaire’s ‘Candide’ is a satire of Leibniz’s claim that we live in the best of all possible worlds, and a brilliant thought experiment with which we can explore whether we agree with that claim. It should be mentioned, though, that ‘Candide’, like all thought experiments, should not be read at face value; we should read it and try to change certain parameters in the book to see if this affects the conclusion Voltaire tries to force on us. Likewise we should read Leibniz’s actual philosophical arguments closely alongside ‘Candide’ and explore the degree to which it is an unfair caricature[2].

It is important to note that a novel doesn’t have to be “high brow” to be useful as a thought experiment for philosophers. Nor does the novelist need to be interested in philosophy, or in attacking a particular philosophical system, for the work to be interesting as a thought experiment. Thus, for example, George Orwell didn’t like philosophy, nor did he read much of it. Nonetheless his 1984, in particular the scene where O’Brien and Winston debate with each other about truth, has massive philosophical importance[3].

All novels have the potential to provide philosophical insights, and at least to give us some interesting premises which we can explore. Novels provide us with possible worlds other than our own actual one, and hence can be used to help us think through various themes and issues. So, for example, Stephen King’s science fiction novel ‘11.22.63’ raises some interesting philosophical issues about the nature of the self. The central protagonist of the novel is Jake Ebbing, an English teacher. Al, the owner of a diner, tells Jake of a portal in his diner that leads into 1958, and lets Jake go through it. After Jake comes back through the portal, Al tells him of his plan to stop the murder of Kennedy in 1963. When Al commits suicide, Jake decides to fulfil Al’s mission and stop the assassination of Kennedy.

However, interesting and central as the plot to stop the Kennedy assassination is, I think that the relationship between Ebbing and another character, Harry Dunning, is even more fascinating. Early in the book Ebbing recounts how, when teaching a General Educational class, he was given an essay by Harry Dunning, a brain-damaged and crippled man. Dunning’s story, ‘A day that changed my life’, was about how his alcoholic father murdered his mother and siblings with a hammer and left him brain damaged and crippled. This story had a deep effect on Ebbing, even reducing him to tears.

Dunning’s story had such an influence on Ebbing that he decided to try to help him. The time portal that Al showed to Ebbing can be used to change things that happened in the past, and hence to create a different future. However, if after changing the past you step through the portal again, you will reset history: your previous changes will be erased. When he returns to the past, Ebbing tries to change the tragic events which led to Dunning’s family being killed and Dunning being left brain damaged and crippled.

The choice of which person to save, and why, would be mind-boggling. People die every day in various preventable ways, whether through accident or murder. When Ebbing decided to change Dunning’s past he would have been moved by the tragedy of Dunning’s story, by the lost possibilities. Dunning’s siblings were wiped out of existence; any potential they had to achieve or experience something was wiped out. By changing past events Ebbing was creating new futures, new potentialities, and new destinies.

Throughout the novel Ebbing manages to save Dunning from his father a number of times. But reality being what it is, there are no happy endings[4] to be had. In one reality Dunning is saved from his father only to be killed in the Vietnam War. In another reality Dunning is saved from his father only to end up living in a post-apocalyptic world. Dunning, it seems, can’t catch a break. In the novel we don’t get to try out endless possibilities to see if we can find a perfect ending for Dunning. The question raised by Dunning’s alternative realities is: is there a possible world in which Dunning gets his happy ending?

Arthur Schopenhauer in his ‘World as Will and Idea’ argued that the nature of existence is such that all creatures are destined to live a miserable existence. The world is full of living things which need to feed off each other. Every creature, by virtue of existing, is taking up space and resources from another creature. We can try as hard as we like to decrease suffering, but our very survival depends on killing and eating other living creatures. No matter how hard we strive to achieve happiness, our bodies eventually decay, break down and die, and the same thing happens to everyone we will ever know and love. Schopenhauer notes that all our fairy stories have a similar structure. Heroes are challenged by evil queens, wicked witches, monstrous kings, etc.; the heroes face challenge after challenge and eventually overcome them. Schopenhauer notes that these fairy stories all end with the claim ‘They all lived happily ever after’. None of the details of these happy endings are ever provided, and the reason, according to Schopenhauer[5], is that even children would recognise the absurdity if they had to read a description of a perfect ending. Such things do not exist.

Now I am too much of a pragmatist to buy into any grand metaphysical narrative, whether optimistic like Leibniz’s or pessimistic like Schopenhauer’s. I think the best we can do is to try to cope with the flux of experience as best we can. However, I do think that Schopenhauer has a point to some degree. Any course of action we take will have consequences, good or bad, for other living creatures. This is why in the clichéd science fiction dialogue one protagonist reminds another that any changes they make can have dire consequences. I want to briefly discuss a possible dire consequence of Ebbing changing Dunning’s past: in a sense, Ebbing is killing Dunning.

The journalist Brendon O’Connor has a daughter, Mary, who has Down syndrome; in 2013 he wrote an interesting article, ‘Would ‘fixing’ our child with Downs mean we’d be given back a stranger?’. In it he noted that we do things all the time to improve those we love: we give them glasses to improve their eyesight, and send them to behavioural specialists to improve their ability to learn. But he worried that if his child were cured of Downs, she would in effect be a different person. Now, in a sense, O’Connor’s sensitive and well-thought-out views are to some degree misplaced. Any cure would not be a miracle overnight cure. The child would still have suffered developmental delay and would slowly have to learn to think differently once the Downs was cured. Because of this slow process of learning, O’Connor would not feel he was losing his daughter; he would rather just think she had begun to look and think differently.

The case of Dunning would be different. He would have his entire life erased. Without the brain damage and the traumatic experience of seeing his family killed, Dunning would have become an entirely different person. The brain is an incredibly complex organ; the brain injury Dunning suffered, and the incredibly traumatic childhood experiences he had, would have had a huge effect on who he became. By changing that, Ebbing was in effect killing Dunning. But most people would argue: so what? Dunning was damaged goods. Killing this damaged person is a good thing if it makes room for a better person[6].

Ebbing’s belief that he is saving Dunning is understandable. He is going to make Dunning physically and cognitively superior and prevent him from experiencing a terrible trauma. However, there is reason to doubt that changing Dunning’s past is for the good. Any life anybody has is going to contain a lot of trauma. No matter which way the chips fall, we will all suffer. But surely the amount of suffering we experience differs depending on our environment. On average, a person living through a severe famine in a war-torn area will have less happiness than a person living in an affluent, peaceful area. Similar things could be said of Dunning: surely he would have been happier if his family had not been slaughtered in front of him by his father. We can take this as axiomatic, right? Well, actually, we simply don’t have enough data about how this one event affected the world Dunning lived in to make any claims about how it affected his overall happiness. Maybe one of his sisters would have turned out to be worse than his father. The truth is that, no matter what our intuitions tell us, we cannot be sure how this would have affected his overall happiness.

The same is true of his physical injury. Can we say for sure that without it he would have been happier? Who knows? In the book, without the injury he ends up going to war and dying. When it comes to his cognitive abilities, again it is hard to be sure. There is little empirical evidence I know of to say that people with higher IQs are happier than those with lower IQs[7]. Is wiping brain-damaged Dunning out of existence without any clear empirical evidence on the issue justified? Ebbing doesn’t really feel the need to look for the evidence; he just assumes that brain-damaged Dunning’s life was terrible and needed to be changed.

To return to Brendon O’Connor: in an interview on The Late Late Show he described how he went through a process of mourning when his daughter was diagnosed with Downs. He had an implicit, idealised image of who his daughter would become, and it took a long process of mourning to cope with the fact that she would not go through the typical developmental milestones that other children do. Apparently this is a common phenomenon for parents. But once they get to know their child, they learn to love the actual person as opposed to some illusory, idealised fiction.

Lacan talks about children going through a mirror stage, where they recognise themselves in a mirror and note the stable nature of the image, which they oppose to their own experience of bodies they have only limited control over. He speculates that children who go through this stage begin identifying with this image, and that this continues through to adulthood. As developmental psychology there is little reason to take Lacan seriously, but he does have a point about the idealised I. We all seem to have a kind of fictional perfect world of which we and those around us are imperfect exemplars. Ebbing used this implicit belief in an idealised I as justification for fixing Dunning. But he had little justification for doing so. He had no real idea of who Dunning was; his beliefs were an unverified assumption. I think we should take King’s novel as a cautionary tale against assuming that some life is an imperfect exemplar of some perfect life it was denied. There are no perfect lives and no happy endings, just people trying to cope with the flux of experience as best they can[8].

[1] Obviously the brilliance of the above-mentioned books goes beyond any use they have as philosophical thought experiments.

[2] Candide, because of its narrow focus on refuting Leibniz, is actually a pretty poor novel, with one-dimensional characters and silly plot twists. But it works as a caricature of Leibniz; to this degree it is not really good art. Dickens’ ‘Hard Times’ manages both to attack a philosophical view (Utilitarianism) and to tell a brilliant human story; not an easy thing to manage.

[3] For discussion of some of these issues see Pinker’s ‘The Language Instinct’ and ‘The Blank Slate’, as well as Rorty’s ‘Contingency, Irony and Solidarity’.

[4] Shout out to the excellent children’s programme ‘Once Upon a Time’, which explores the issue of happy endings in an easy-going manner.

[5] David Berman discusses Schopenhauer and fairy tales in the introduction to the Everyman edition of ‘World as Will and Idea’.

[6] It could be claimed that saving Dunning isn’t Ebbing’s only motivation; he is also saving Dunning’s siblings. But this is not Ebbing’s primary motivation. Furthermore, Ebbing has no idea whether saving Dunning’s siblings will lead to a better or worse world.

[7] To think through this premise I would recommend reading ‘Flowers for Algernon’, a science fiction story about a man with intellectual disabilities who has his IQ radically raised. The jump in IQ isn’t followed by a jump in happiness. A novel is no proof of the issue either way, but it can be a spur to thinking about it and may inspire some empirical research.

[8] Note that a parent who tries to cure their child of some disability is not making the same mistake as Ebbing. They don’t need to destroy anyone when using a cure; they are just trying to help their child cope with the flux of experience in the best manner they know how.

MINDS WITHOUT MEANINGS AND NEUROPHILOSOPHY

While reading Fodor and Pylyshyn’s[1] recent book ‘Minds Without Meanings’, my mind was constantly drawn back to Paul Churchland’s (2012) book ‘Plato’s Camera’. Fodor and Churchland are philosophers, while Pylyshyn is a cognitive scientist. Despite the fact that Churchland and Fodor are philosophers, their respective books are chock-full of empirical data and experimental results as well as theoretical claims. Churchland, like F and P, considers his theory to be an empirical theory subject to empirical refutation like any other theory in psychology and neuroscience. Churchland draws his data primarily from neuroscientific evidence, while F and P primarily use data drawn from cognitive science. In this blog I will consider some key areas of disagreement between F and P and Churchland, reflect on which theory best accounts for the data, and propose some further experiments which will help us decide between the respective theorists’ views on mind and meaning.

F and P begin their book by outlining nine working assumptions: (1) they are realists about belief/desire psychology; (2) they are naturalists; (3) they accept the type/token distinction; (4) they accept the psychological reality of linguistic posits; (5) they assume propositions have compositional structure; (6) mental representations have compositional structure; (7) they accept the representational theory of mind; (8) they accept the computational theory of mind; (9) they argue that thought is prior to language.

There are many objections that could be made to the above assumptions; I laid out where I stand in relation to all of them in my last blog. Here I will discuss some assumptions that Churchland objects to. One area where he has always disagreed with Fodor is the language of thought thesis; as a result, he would have serious issues with assumptions 1 and 9 above. Here is Churchland on the language of thought argument:

“For now, let me announce that, for better or for worse, the view to be explored and developed in this book is diametrically opposed to the view that humans are capable of cognition precisely because we are born with an innate ‘language of thought’. Fodor has defended this linguaformal view most trenchantly and resourcefully in recent decades, but of course the general idea goes back at least to Kant and Descartes. My own hypothesis is that all three of these acute gentlemen have been falsely taken in by what was, until recently, the only example of a systematic representational system available to human experience, namely, human language. Encouraged further by our own dearly beloved Folk Psychology, they have wrongly read back into the objective phenomenon of cognition-in-general a historically accidental structure that is idiosyncratic to a single species of animal (namely, humans), and which is of profoundly secondary importance even there. We do of course use language-a most blessed development we shall explore in due course-but language like structures do not embody the basic machinery of cognition. Evidently they do not do so for animals and not for humans either, because human neuronal machinery, overall differs from that of other animals in various small degrees, but not in fundamental kind.” (Paul Churchland ‘Plato’s Camera’ p.5)

“As noted earlier, Jerry Fodor is the lucid, forthright, and prototype perpetrator on this particular score, for his theory of cognitive activity is that it is explicitly language like from its inception (see, e.g. Fodor 1975)-a view that fails to capture anything of the very different, sublinguistic styles of representation and computation revealed to us by the empirical neurosciences and by artificial neuromodeling. Those styles go wholly unacknowledged. This would be failure enough. However, the ‘Language of Thought’ hypothesis fails in a second monumental respect, this time ironically by undervaluing the importance of language. Specifically, it fails to acknowledge the extraordinary cognitive novelty that the invention of language represents, and the degree to which it has launched humankind on an intellectual trajectory that is impossible for creatures denied the benefits of that innovation, that is to creatures confined only to the first and second level of learning. (ibid p.26)

Some of Churchland’s reasoning above is unfair to F and P. He argues, for example, that the neuronal activity of animals and babies doesn’t differ from ours to that great a degree. From this he concludes that since neither animals nor babies have language, but both share a lot of neural machinery with us, there seems little point in assuming that humans’ primary thought processes involve a language of thought. But this is a question-begging assumption. We have reason to believe that children as young as four months have concepts of objects, and there is evidence that children, before learning their first words at around twelve months, have expectations of causality, number, etc.[2]. There is also evidence that animals think using concepts; in fact it is a working assumption of most cognitive scientists that they do[3]. The fact that humans, animals, and pre-linguistic babies share a lot of neural architecture is in principle neutral on the issue of whether normal human adults think using a language of thought. Nothing in F and P’s work precludes the possibility that animals and babies who have not yet acquired a public language are thinking in concepts using a proto-language of thought. The degree to which pre-linguistic children and non-human animals have concepts, and whether they can combine these concepts to think, is an open empirical question[4]. So given the open-ended empirical nature of the debate, Churchland cannot just assume that normal language-speaking humans having a lot of neural tissue in common with children and non-human animals has any bearing on the debate about the existence of a language of thought. That he does make this move is simply evidence of his begging the question against F and P.

Churchland also uses another argument that he thinks shows that Fodor’s LoT is false. He argues that by claiming that our thinking is done in a language of thought, Fodor ignores the incredible cognitive benefits conferred on our species by having a public language. This argument simply doesn’t work. Fodor believes that our public language is derived from our private language of thought, but it doesn’t follow from this that a public language has negligible importance. While a private language of thought gives us the ability to combine concepts in a productive manner, and hence to think a potentially infinite number of thoughts, this solitary mode of thinking has its limits. With a public language we have not only our own thoughts to rely on but the thoughts of others. A child born into a particular culture will inherit the combined wisdom of that society. If the culture keeps written records then the child will eventually be able to read the thoughts and experiences of people long dead who lived in different places at different times. A shared language also makes it easier for members of a culture to explain to one another how to do various things, and this will have huge benefits. So a shared language, with the ability to share information, confers huge cognitive benefits, and nothing F and P have said denies this. Again Churchland’s attack hits the wrong target.

Churchland goes on to make further claims about the LoT hypothesis:

“But I am here making a rather more contentious claim, as will be seen by drawing a further contrast with Fodor’s picture of human cognition. On the Language of Thought (LoT) hypothesis, the lexicon of any public language inherits its meanings directly from the meanings of the innate concepts of each individual’s innate LoT. Those concepts derive their meanings, in turn, from the innate set of causal sensitivities they bear to various ‘detectable’ features of the environment. And finally, those sensitivities are fixed in the human genome, according to this view, having been shaped by many millions of years of biological evolution. Accordingly, every normal human at whatever stage of cultural evolution, is doomed to share the same conceptual framework as any other human, a framework that the current public language is doomed to reflect. Cultural evolution may therefore add to that genetic heritage, perhaps considerably, but it cannot undermine or supersede it. The primary core of our comprehensive conception of the world is firmly nailed to the human genome, and it will not change until the human genome has changed.

I disagree. The lexicon of a public language gets its meanings not from its reflection of an innate LoT, but from the framework broadly accepted or culturally entrenched sentences in which they figure, and by the patterns of inferential behaviour made normative thereby. Indeed, the sublinguistic categories that structure any individual’s thought processes are shaped, to a significant degree, by the official structure of the ambient language in which she was raised, not the other way around” (ibid p.28)

I have to admit that I share Churchland’s scepticism about Fodor’s idea that all our concepts are innate. On the face of it the conclusion seems incredible; so incredible, in fact, that the majority of theorists have simply rejected the argument outright. Churchland, for example, doesn’t say what he thinks is wrong with Fodor’s argument. Rather he simply states that he doesn’t accept Fodor’s conclusions and holds that concepts get their meaning publicly through our shared culture and developmental history.

Before assessing Churchland’s alternative it is important to consider what evidence Fodor has to support his views on concepts. ‘Minds without Meanings’ is his most recent attempt to explicate those views, so it is worth working through the arguments sketched there. In ‘Minds without Meanings’, F and P argue that current views on the nature of concepts are radically wrongheaded; they dedicate an entire chapter to showing that all other theories of the nature of concepts fail.

They begin by arguing that concepts are not mental images, and give the following four reasons. (1) We have many concepts that apply to things that we cannot picture. (2) Black Swan arguments: we can have an image of (A) a black swan, but what about an image (B) that shows that all swans are white? Or an image (C) that shows that (A) and (B) are incompatible with each other? This is not possible, because images cannot depict incompatibility; but we do have conceptual knowledge of incompatibility; therefore images are not concepts. (3) Constituent structure: images have parts, not constituents. We can divide an image up in as many different ways as we like and put it back together in any arbitrary way. Concepts, however, are combined according to rules: they have a syntax that governs how they can be put together. Pictures do not follow rules like this, therefore they are not concepts. (4) Leibniz’s Law: mental images supposedly occur in the brain (where else could they occur?), but they cannot be identical with any brain area because they have properties that no brain area has. We can have a mental image of a purple cow, but there is no purple in the brain. So on pain of breaking Leibniz’s Law we have to admit that mental images are not brain states. But unless we want to become dualists (which contradicts F and P’s naturalist assumption above) we have to argue that mental images only seem to exist; in reality they are propositional at base. And since mental images don’t really exist, they cannot be concepts.

Secondly they argue that concepts are not definitions, because: (1) we have very few definitions of any concepts after over two thousand years of philosophers looking for them. (2) Not all concepts can have definitions: logically, some of them must be primitive concepts in terms of which the others are defined, but we have no way of finding out what these primitive concepts are. Some people argue that the primitive concepts are abstract innate concepts like causality, agent, object, etc.; however, F and P argue that these supposed primitives can be broken down further, so they are not really primitive (see Spelke 1990 on objects). The other approach is to say the primitive concepts are sensory concepts; however, there are few concepts that can be explicated in terms of sensory primitives. (3) Fodor’s paradox: if concepts were definitions we could not learn any concepts. Take the definition ‘Bachelors are unmarried men’. This means that the concept BACHELOR is the same as the concept UNMARRIED MAN. So to learn BACHELOR is to learn that bachelors are unmarried men. But BACHELOR and UNMARRIED MAN are the very same concept, so it follows that you cannot learn the concept BACHELOR unless you already have the concept UNMARRIED MAN (and vice versa). Therefore you cannot learn the concept at all. Something is obviously radically wrong with the definition story.

Thirdly they argue that concepts are not stereotypes, because concepts compose but stereotypes do not; therefore concepts are not stereotypes. They explicate this with their famous PET FISH and UNCAT examples.

Fourthly they argue that concepts cannot be inferential roles. They claim that if concepts are inferential roles then we need to be able to say which conceptual content supervenes on which inferential roles, and they note that there are really only two ways of doing this. (1) By appealing to holism. But, they argue, if holism is true and every inference that a concept is involved in is constitutive, then the content of one’s concepts alters as fast as one’s beliefs do (Minds without Meanings p. 55), and batty consequences follow from this. For example, two people may agree in some judgement about concept x at time t1, but at t2, because both have had their concepts modified by new information, they no longer even share the same concept. If people’s concepts are modified moment to moment by their own idiosyncratic experience, communication becomes very difficult. (2) By appealing to analyticity. But we have good Quinean reasons to think that appealing to analyticity is a bad way of reasoning, because we cannot explicate analyticity in a non-circular manner.

The fifth reason is directly relevant to Churchland. F and P do not think that concepts can be explicated in terms of connectionist models, which they criticised in detail in their 1988 paper. In ‘Minds without Meanings’ they give an abbreviated version of the 1988 argument. Firstly they note that connectionist models face serious difficulties from the start, because the distinction between thoughts and concepts is not often noted in the literature. For them a thought is a mental representation that expresses a proposition (ibid p. 47). So on an associationist model WATER may be associated primarily with WET. But, they argue, it wouldn’t be right to equate having the thought ‘Water is wet’ with associating the concept WATER with the concept WET. This is because the thought that ‘Water is wet’ has logical form: we predicate the property of wetness of the object water. Once we make this predication we are making a claim that is true or false: true if the stuff water has the property of wetness, false otherwise. The thought ‘Water is wet’ has logical form and is made true or false by things in the world. So even if concepts are associative nodes within a semantic network, on F and P’s model concepts are distinct from thoughts, and connectionist models cannot explain thought even if they can explain concepts.

However, despite the fact that they think connectionist models are in principle incapable of explaining what thought is, they agree to bracket this consideration. They then consider the question of whether connectionist models can explain what concepts are. Again they argue that they cannot.

They ask us to think of our total set of concepts as something that can be represented as a graph of finitely many labelled nodes, with paths connecting some of them to others (ibid p.49). However, there is a severe difficulty with this approach. For the connectionist, the content of a concept (whether it is, say, the concept of a dog) is provided by the label. But the connectionist model is supposed to explain what the concept is, and it cannot do this by relying on labels, on pain of circularity. F and P note that Quine is right that most theories of meaning suffer from a serious circularity problem. So if a connectionist wants to explain conceptual content without question begging, he will need another approach.

F and P argue that if we cannot, on pain of circularity, equate the content of a node with its label, then we must say that the content is simply provided by the nodes and their various connections to other nodes. But there is a problem with this approach too. It means that corresponding nodes in isomorphic graphs have the same content, whatever the labels of their connected nodes may be (ibid p.50). This cannot work, because it means that the concept SQUARE in one graph could occupy the same position as the concept ROUND does in another graph, and the two would then count as the same concept. So for F and P this argument shows that connectionist models are in principle incapable of explaining what the contents of different concepts are; hence they cannot explain what concepts are.
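F and P’s point about isomorphic graphs can be made concrete with a toy sketch (my own illustration, not from their book). If content is fixed by connectivity alone, then two networks with identical structure but intuitively different labels must come out identical in content:

```python
# Toy illustration (mine, not F and P's): two semantic networks with the
# same connectivity but different labels. If conceptual content were fixed
# by network structure alone, SQUARE in graph_a and ROUND in graph_b would
# have to count as the same concept.

graph_a = {"SQUARE": ["SHAPE", "ANGULAR"], "SHAPE": [], "ANGULAR": []}
graph_b = {"ROUND": ["SHAPE", "CURVED"], "SHAPE": [], "CURVED": []}

# A node mapping witnessing that the two graphs are isomorphic
mapping = {"SQUARE": "ROUND", "SHAPE": "SHAPE", "ANGULAR": "CURVED"}

def is_isomorphism(g1, g2, f):
    """Check that f maps g1 onto g2 while preserving every connection."""
    return (sorted(f.values()) == sorted(g2) and
            all(sorted(f[m] for m in g1[n]) == sorted(g2[f[n]]) for n in g1))

print(is_isomorphism(graph_a, graph_b, mapping))  # True: structure alone
# cannot distinguish the concept SQUARE from the concept ROUND
```

The labels differ, the structure doesn’t; so a purely structural account of content assigns SQUARE and ROUND the same content, which is F and P’s reductio.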

They note that Paul Churchland (2006) tries to get over this difficulty by arguing that basic level concepts are grounded in sensory perception and that is how they get their content. This approach though won’t work because it is vulnerable to the same objections that Berkeley raised against concepts being associated with sense data.

This long detour through the five main objections that F and P make to different theories of conceptual content shows why Fodor originally argued that our concepts must be innate. For him, concepts cannot be explained in terms of mental images, which are faint impressions of our sensory experiences. They cannot be explained as definitions derived from basic sensory primitives, or as definitions derived from innate metaphysical concepts of causation, agency, etc. They cannot be explained as something we derive from prototypes and statistical generalisation. They cannot be explained as inferential roles. And they cannot be explained via connectionist models. Therefore, if we have no other explanation of how concepts are acquired, and we think people have concepts, we will be forced to conclude that our concepts must be innate.

F and P now argue that we can avoid the paradoxes about how concepts are learned (and the conclusion that all concepts are innate) if we stop thinking of concepts as things that have intensions. Hence they sketch their purely referential theory of concepts, arguing that, intuitions to the contrary notwithstanding, this approach is viable.

I discussed Fodor’s objections to various theories of concepts in my last blog. In a nutshell, I think he is right in his criticism of concepts as mental images, but that his arguments against concepts being prototypes badly misconstrue Eleanor Rosch’s prototype theory. I think his arguments against concepts being definitions are pretty convincing, but his argument against inferential role semantics is very weak. Here, though, I just want to discuss Churchland’s (2012) objections to Fodor’s views on concepts.

Churchland argues that our lexicon is largely determined by the public language we have learned in the idiosyncratic environment we happen to have been born into. Any limited pre-linguistic concepts that we have will be to an extent over-written by the conceptual abilities our particular culture gives us. So he disagrees with what he takes to be Fodor’s position: that all our concepts are written into our genome and cannot be radically changed by the culture we are born into. It is unclear that F and P need to accept Fodor’s old argument for innate concepts, since they now think our concepts are determined entirely by their extensions, with intensions playing no role. However, their views are still at odds with Churchland’s claim that the lexicon of our public language will radically modify the concepts we use when thinking. For F and P, since our concepts are determined by their extensions, our public language should not really affect the concepts we use to think about the world.

In the above discussion Churchland claimed that Fodor’s theory of concepts was incorrect. However, he did not engage with Fodor’s arguments that there is no way we can acquire concepts other than their being innate or being determined by their extensions. Obviously, the key argument that Churchland would object to is F and P’s claim that concepts cannot be explicated in terms of connectionist models. Churchland has criticised both Fodor and Pylyshyn for what he views as their inadequate understanding of connectionist models:

“Fodor briefly turns to address, with more than a little scepticism, the prospects for a specifically ‘connectionist’ solution to his problem, but his discussion is hobbled by an outdated and stick-figured conception of how neural networks function, in both their representational and in their computational activities. His own reluctant summary (Fodor 2000, 46-50) wrongly makes localist coding (where each individual cell possesses a proprietary semantic significance) prototypical of this approach, instead of population or vector coding (where semantic significance resides only in the collective activation patterns across large groups of cells). And it wrongly assimilates their computational activities to the working out of ‘associations’ of various strengths between the localist-coded cells that they contain, instead of the very different business of transforming large vectors into other large vectors. (To be fair to Fodor, there have been artificial networks of exactly the kind he describes: Rumelhart’s now ancient ‘past-tense network’ [Rumelhart and McClelland 1986] may have been his introductory and still-dominant conceptual prototype. But that network was functionally inspired to solve a narrowly linguistic problem, rather than biologically inspired to address cognition in general. It in no way represents the mainstream approaches of current neuroanatomically inspired connectionist research). Given Fodor’s peculiar target, his critique is actually correct. But his target on this occasion is, as it happens, a straw man. And in the meantime, vector-coding, vector-transforming feed-forward networks-both biological and artificial-chronically perform globally sensitive abductions as naturally and as effortlessly as a baby breathes in and out.” (ibid p.71)

“I here emphasize this fundamental dissociation, between the traditional semantic account of classical empiricism and the account held out to us by a network-embodied Domain-Portrayal Semantics, not just because I wish to criticize, and reject the former. The dissociation is worth emphasizing because the latter has been mistakenly, and quite wrongly, assimilated to the former by important authors in the recent literature (e.g. Fodor and Lepore 1992, 1999). A state-space or domain-portrayal semantics is there characterized as just a high-tech, vector-space version of Hume’s old concept empiricism. This is a major failure of comprehension, and it does nothing to advance the invaluable debate over the virtues and vices of the competing contemporary approaches to semantic theory. To fix permanently in mind the contrast here underscored, we need only to note that Hume’s semantic theory is irredeemably atomistic (simple concepts get their meanings one by one), while domain-portrayal semantics is irreducibly holistic (there are no ‘simple’ concepts, and concepts get their meanings only as a corporate body). Any attempt to portray the latter as just one version of the former will result in nothing but confusion” (ibid p.88)

In the above quotes Churchland is discussing Fodor’s treatment of the Frame Problem in ‘The Mind Doesn’t Work That Way’, where Fodor criticised attempts by connectionist models to overcome that problem. Churchland complains that Fodor’s interpretation of connectionism is wrong because he is working from antiquated models. Churchland has a point; Fodor has an irritating habit of treating any theory that disagrees with his own as another form of empiricism. Nonetheless, Fodor’s misunderstanding has no real bearing on his criticism of connectionist models of concepts: we still don’t have an answer to how concepts get their contents in connectionist models.
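The localist/vector-coding contrast that Churchland invokes can be sketched in a few lines (a toy of my own, not drawn from any actual network model): in a localist scheme one unit stands for one concept, whereas in population coding a concept is a pattern of activation spread across many units, and computation is vector-to-vector transformation.

```python
# Toy contrast (my own sketch) between localist and population coding.

# Localist: each unit has its own proprietary semantic significance.
localist = {"DOG": [1, 0, 0], "CAT": [0, 1, 0], "FISH": [0, 0, 1]}

# Population (vector) coding: a concept is an activation pattern across
# all units; no single unit "means" anything on its own.
population = {"DOG": [0.9, 0.4, 0.1],
              "CAT": [0.8, 0.5, 0.3],
              "FISH": [0.1, 0.2, 0.9]}

def transform(weights, vector):
    """Computation as vector-to-vector transformation: one pass of an
    activation vector through a weight matrix (no learning involved)."""
    return [sum(w * v for w, v in zip(row, vector)) for row in weights]

# An arbitrary, made-up weight matrix standing in for learned synapses
weights = [[0.5, 0.2, 0.0],
           [0.1, 0.7, 0.1],
           [0.0, 0.3, 0.6]]

# Transforming one large vector into another, in Churchland's phrase
out = transform(weights, population["DOG"])
```

Note that nothing in the sketch answers F and P’s question: the dictionary keys here are exactly the “labels” whose content the connectionist still owes us an account of.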

However, whatever we make of F and P’s criticisms of connectionist theories of concept content, Churchland has another argument against their view that concepts are entirely determined by their extensions:

“Before leaving this point, let me emphasize that this is not just another argument for semantic holism. The present argument is aimed squarely at Fodor’s atomism in particular, in that the very kinds of causal/informational connections that he deems necessary to meaning are in general impossible, save as they are made possible by the grace of the accumulated knit of background knowledge deemed essential to meaning by the semantic holist. That alone is what makes subtle, complex, and deeply context dependent features perceptually discriminable by any cognitive system. Indeed it is worth suggesting that the selection pressures to produce these ever more penetrating context-dependent discriminative responses to the environment are precisely what drove the evolutionary development of multi-layered networks and higher cognitive processes in the first place. Without such well-informed discriminative processes thus lifted into place, we would all be stuck at the cognitive level of the mindless mercury column in the thoughtless thermometer and the uncomprehending needle position of the vacant voltmeter.

Beyond that trivial level, therefore, we should adopt it as a (pre-revolutionary) principle that there can be “No Representation without at least some comprehension… In sum, no cognitive system could ever possess the intricate kinds of causal or informational sensitivities variously deemed necessary by atomistic semantic theories, save by virtue of its possessing some systematic grasp of the world’s categorical/causal structure. The embedding network of presumptive general information so central to semantic holism is not the post facto ‘luxury’ it is on Fodor’s approach. It is epistemologically essential to any discriminative system above the level of an earthworm.” (ibid p.97)

The very points that Churchland makes above are addressed in chapters 4 and 5 of F and P’s ‘Minds without Meanings’. This part of the dispute is an entirely empirical one, and I will address it in my next blog.

[1] Fodor and Pylyshyn will be referred to as F and P throughout this blog.

[2] For evidence of children’s conceptual abilities pre-learning a public language see Spelke 1990, Carey 2009, and Bloom 2000.

[3] For an excellent discussion of animal cognition and whether animals have concepts see Kristin Andrews ‘The Animal Mind: An Introduction to the Philosophy of Animal Cognition’ (2014)

[4] I discussed this question in my blog-post ‘What are Concepts and which Creatures have them?’

Meditation and The Philosophy of the Self

Can meditation reveal to us real knowledge about the nature of the self? In his recent book ‘Waking Up’, Sam Harris argued that indeed it can. When reading Harris’s well-written little book, I was in the position of a person being told about a land he had never visited, who has to take the word of his guide. So when Harris claimed that experience x or y was possible when meditating, I had to take his word for it, as I have only done limited meditation myself. Harris’s book was very informative and I learned a lot from it. Nonetheless I had some problems with aspects of it from a philosophical perspective.
One problem I had with the book was that he wrongly claimed that Dan Dennett and Paul Churchland are eliminativists about consciousness. Dennett has long argued that he is not claiming that consciousness does not exist; rather, he argues that consciousness exists but is just not what you think it is. And Paul Churchland has never claimed that consciousness does not exist. Harris knows that both thinkers deny being eliminativists about consciousness, so if he is going to claim that they are, he should at least have provided some textual evidence to support his claim. His bald assertion that they are eliminativists is not supported by any evidence.
The reason Harris accused Churchland and Dennett of being eliminativists was that he wanted to create a kind of forced choice: either you deny that consciousness exists, or you admit that it exists and cannot be explained by current (and possibly any future) science. But I don’t think we need this forced choice. Dennett has worked hard to give us an alternative to this view, and if Harris wanted to demonstrate that Dennett’s view is insufficient he would need to engage with it seriously, not simply caricature it. I personally am coming around to the view that there is a hard problem of consciousness and that Dennett has not explained this issue away. But Harris didn’t engage with his opponents on this issue at all.

A particularly weak aspect of ‘Waking Up’ was Harris’s claim that we can prove the non-existence of the self through meditation. I have long thought of the self as a theoretical fiction; like Dennett, I think of the self as a centre of narrative gravity. Harris cites the work of Hume and Parfit as presenting good reasons to be sceptical about the existence of the self. However, he goes on to argue that if a person meditates they will have direct experience of the non-existence of the self, and he even argues that this direct experience is better evidence than the theoretical arguments of Hume and Parfit. I am sceptical about drawing large-scale philosophical conclusions from direct experiences in meditation. Harris seems to agree that we shouldn’t draw large-scale metaphysical conclusions from meditation, but on the issue of the self he ignores his own advice and uses his direct experience in meditation to make a controversial philosophical claim.
Philosophers like Locke and Berkeley used to engage in disputes about mental faculties based on their direct introspective experiences. Introspective psychology discovered that some people have different mental capacities than others: some are eidetic imagers, some can only form vague mental images, while some people are non-imagers (see Galton and William James for statistical introspective evidence; for neuroscientific and behavioural evidence see Kosslyn et al. (2006)). Now, when Harris argues that he has direct experience of the non-existence of the self, he has no evidence other than his own subjective impressions. To support his conclusions he would need to show that most people meditating have the same experiences of the non-existence of the self as he does, and that those who don’t have these experiences are meditating incorrectly. Furthermore, when people like Descartes and Berkeley argue that they have a strong experience of the self through direct introspection, Harris needs to provide evidence that his experiences in meditation are more valid than their experiences when directly introspecting. If a person doing phenomenological analysis (or introspective psychology) draws a different conclusion about the nature of the self, how can Harris show that they are mistaken through his direct experiences in meditation? It seems to me he cannot. His claims about the experiences of people who meditate are subject to the same objections as claims based on introspection.

If one person claims that upon close introspection they experience the self (Descartes, Berkeley), and another claims that on close introspection they do not (Hume), you are at an impasse. It takes philosophical analysis and third-person science to decide the issue. Likewise, if Harris claims that the self doesn’t exist and that he knows this from direct observation while meditating, and another spiritualist claims to directly experience the self in deep meditation, we have no way of deciding the issue. Unless he can overcome these problems, Harris must accept that he is wrong to draw metaphysical conclusions from his experiences while meditating.

Consider Descartes’ and James’s differing conceptions of the mind:
“When I consider the mind—i.e. consider myself purely as a thinking thing—I can’t detect any parts within myself; I understand myself to be something single and complete. The whole mind seems to be united to the whole body, but not by a uniting of parts to parts, because if a foot or arm or any other part of the body is cut off, nothing is thereby taken away from the mind. As for the faculties of willing, understanding, of sensory perception and so on, these are not parts of the mind, since it is one and the same mind that wills, understands and perceives. They are (I repeat) not parts of the mind, because they are properties or powers of it.” (Descartes: Meditations on First Philosophy p. 11)
Let the case be what it may be for others, I am as confident as I am of anything that, in myself, the stream of thinking (which I emphatically recognize as a phenomenon) is only a careless name for what, when scrutinized reveals itself to consist of the stream of my breathing. (James 1904)
There is I mean no aboriginal stuff or quality of being, contrasted with that of which material objects are made, out of which our thoughts of them are made, but there is a function in experience which thoughts perform…(namely)…knowing. Consciousness is supposed necessary to explain the fact that things not only are, but get reported, are known. (James: 1904 p.101)
Everyone assumes that we have direct introspective acquaintance with our thinking activity as such, with our consciousness as something inward contrasted with the outer objects which it knows. Yet I must confess that for my part I cannot feel sure of that conclusion. Whenever I try to become sensible of my thinking activity as such, what I catch is some bodily fact an impression coming from my brow, or head, or throat or nose. It seems as if consciousness as an inner activity were rather a postulate than a sensibly given fact. (James: 1892: Text Book of Psychology p. 467)

Both thinkers describe their experience of consciousness and thinking differently. From a first-person point of view they are entitled to do this. The problem comes when they try to generalise their own experiences and claim that all people must experience the world in the same way. Both thinkers could accuse the other of not introspecting properly, but there seems little we can do to resolve the issue. Perhaps they have different types of consciousness, or one of them is in the grip of a theory which warps what he experiences. To decide the issue we need to move beyond first-person science and use third-person science and thought experiments. The exact same situation arises for Sam Harris and his claims about directly experiencing the non-existence of the self. The best he can say is that this is how things seemed to him and that he is pretty certain he is correct. But his subjective reports give him no licence to pontificate about the non-existence of the self for others. Real analysis of whether the self exists relies on philosophical analysis and third-person science, not on the subjective intuitions of people meditating.

Fodor and Pylyshyn: Three Minor Quibbles

Some brief notes on Fodor and Pylyshyn (2015): the main assumptions of their book, and where I agree or disagree with each of them:

(1) They assume the truth of belief/desire psychology and think that all branches of behaviourism are false. (I do not agree with this assumption and think that eliminative materialism may turn out to be true about the P/A. I think the proper course of action is to wait and see; the truth is we don't know the answer on this one yet.)

(2) They assume the truth of Naturalism. (I agree with this assumption)

(3) They assume the truth of the type/token distinction. (I agree with this assumption)

(4) They assume that grammatical discoveries have psychological reality. (I think that the grammatical discoveries in linguistics have psychological reality and are implemented by the brain so I accept their assumption)

(5) They assume that propositions have compositional structure. (I accept this assumption)

(6) They assume that mental representations have compositional structure. (I accept this assumption in so far as I agree with the RTM. But I have my doubts about the RTM)

(7) They assume that the Representational Theory of the Mind is true. (I am not sold on the RTM. I haven't made up my mind yet, but I am working towards a theory of embodied pragmatism which is anti-RTM. I don't think we can say for sure whether the RTM will survive in a completed science of the mind/brain; my guess at this early stage is that the RTM is false.)

(8) They assume that the Computational Theory of the Mind is true. (I agree with this if the CTM includes connectionist models and Bayesian models. I agree with Fodor that the CTM as traditionally conceived (Fodor's isolated modules) cannot, and probably never will, handle things like global abduction. That is why I don't understand why Fodor and Pylyshyn don't use connectionist models; they are wrong that connectionist models cannot handle compositionality (see Chalmers [insert ref]).)

On another note, while I accept the CTM (suitably modified) I think these models have to stop idealizing away from an organism's need to supply its own energy to fight off entropy (see Deacon's 'Incomplete Nature'). I think the idealisation inherent in some CTM models results in wildly inaccurate models. So while I accept the CTM, I probably don't do so in any sense that Fodor and Pylyshyn would agree with.

(9) They assume that thought has priority over language. (This is a complex one. I don't accept the view that people always think in an internal Language of Thought, but strictly speaking this isn't what Fodor and Pylyshyn are asking. They are asking whether children develop thought prior to acquiring language. This is a massive area. I agree that thought is prior to language and that language is not necessary for thought. But I am not nearly as confident about the issue as Fodor and Pylyshyn are.)

So assumption 1, that propositional attitude psychology must be maintained, is the only one of their assumptions I disagree with outright. I don't even say for certain that this assumption is false; I merely think that we don't know whether it is true, and that more research is needed before we can decide the issue. Until such research is concluded I prefer to adopt Dennett's intentional stance approach, as opposed to making a priori assumptions as Fodor and Pylyshyn do. I agree with them on assumptions 2, 3, 4, and 5. I agree with them on 8 and 9, though I am not as confident in my beliefs on these topics as they are, and given the way I interpret 8 they would probably have serious difficulties with my agreement. I agree with 6 to the extent that I accept the RTM, and I am not sure to what extent I accept the RTM. I think 7 is probably false, but I am withholding judgement till I learn more.

I will examine how my differing assumptions relate to my difficulties with their central argument later on. For now I want to note some quibbles I have with claims they make in the preliminary part of their book.

First quibble:

Fodor and Pylyshyn engage in the grand old Chomskian tradition of dismissing scientific theories while presenting little justification. Thus we get the following:

“To begin with, we take it that behaviourism is false root and branch; in the paradigm cases, behaviour is the effect of mental causes… Wittgenstein and Skinner both held that the first-language acquisition is somehow the effect of “training” (“socially mediated reinforcement”, or the like) But that cannot be right either, since it turns out that children generally neither need nor get much language training in the course of learning a first language. Still more to the point, there is (as far as we know) no serious suggestion anywhere in either psychology or philosophy as to how training might work its putative effects…The thesis that (accepting, perhaps, occasional reflexes) new-borns are cognitively inert has become increasingly unattractive since the collapse of the Piagetian program in developmental psychology…” (Fodor and Pylyshyn: ‘Minds Without Meanings’ pp. 2-15)

This sort of hubris has become standard in cognitive science. Chomsky is the master of dismissing all psychologists, philosophers or linguists who disagree with him, and Fodor and Pylyshyn show that they are more than a match for him when it comes to the rhetoric of dismissing the work of rivals. On the issue of behaviourism, it is now standard for cognitive scientists to act as though behaviourism is dead. This is simply not true. Behaviourism (in the guise of Applied Behavioural Analysis) is a flourishing field which is used regularly in psychiatric and intellectual disability institutes, and the evidence for the success of Applied Behavioural Analysis is overwhelming. Furthermore, the standard modes of attack against behaviourists have been refuted. I won't go over the material here but refer my readers to my blog-posts (1) Poverty of Stimulus Arguments and Behaviourism, (2) PECS and Verbal Behaviour, and (3) Some Behavioural Techniques and The Idea of a Blank Slate. In these blog-posts I demonstrate that the standard arguments against behaviourism are all false: the claim that behavioural techniques cannot work, poverty of stimulus arguments, and the claim that children do not get corrected by their peers for incorrect grammatical assertions (and that even if they did they would not make use of these corrections) (see Pullum and Scholz 2002, Hart and Risley 1995, and Chouinard and Clark 2002). I think this data clearly shows that, Fodor and Pylyshyn's confident assertions aside, behaviourism is alive and well.

On the issue of the Piagetian paradigm being dead and buried, again Fodor and Pylyshyn are massively exaggerating. It is true that many contemporary developmental psychologists disagree with some central claims of Piaget. This, though, is standard in science. Darwin didn't have a mechanism with which to explain heredity, though we do now; nobody thinks that this fact refutes the theory of evolution. Piaget was wrong in his views of pre-linguistic child development; children have sophisticated concepts of objects which go beyond anything that Piaget imagined. Piaget didn't really make use of the preferential looking tasks developed by Fantz, and for this reason his research underestimated children's cognitive abilities. But it is worth noting that the researchers who have refuted aspects of Piaget's developmental theory don't consider themselves to be smashing a paradigm. Writers like Markman, Carey, Spelke et al. typically start their books by stating their admiration for the pioneering work of Piaget before stating where they disagree with him. I have written about some of this research in my blog post 'Indeterminacy of Translation and Innate Concepts', where I argued that Spelke et al. have shown that aspects of Piaget's theory were wrong (though I don't think that any of this evidence demonstrates that children have innate concepts). But neither I nor any of the authors working in developmental psychology argued that Piaget's developmental psychology was destroyed. I think Fodor and Pylyshyn really should think before offering such wildly exaggerated assessments of various psychological schools. Somehow I doubt that this will happen, though, because pretending that rival schools of thought have been smashed into nothing helps them with their 'I am the only president you've got' style of argument:

“But not liking a hypothesis is not, in and of itself, grounds for rejecting it; one has the obligations of refuting it and offering serious alternatives. To our knowledge, neither of these burdens has been met by the (many) cognitive scientists who disapprove of BCS; and we doubt that either is likely to be in any future that we are now able to foresee. And any old theory is better than none: out of nothing comes nothing” (ibid p.17)

Their claim that there are no serious opponents to their views is dubious at best. It is a political manoeuvre where you throw mud at your rivals and declare to the voters: look, I am not great, but at least I am not as bad as that guy. If Fodor and Pylyshyn want to declare all opponents of their view dead they owe it to their readers to seriously engage with their rivals. This is science, not politics. What is important is evidence, not rhetoric.

Second Quibble:

From pages 20 to 32 they present a series of arguments as to why concepts cannot be images. There are some serious arguments in this section, and I find myself largely in agreement with them. They have four arguments against concepts being images: (1) Unpicturable Concepts: some concepts apply to things that cannot be pictured. (2) The Black Swan Argument: we cannot form images that show that all swans are white. (3) Concepts have constituent structure, but images only have parts. (4) Leibniz's Law: concepts cannot be images because there are no images in the brain. I think that in this section they do a good job of arguing that concepts are not images. However, I have one or two quibbles that I want to briefly discuss here.

They begin their critique of the idea that concepts are images by discussing George Berkeley's famous critique of John Locke, who claimed he could form an image of the abstract idea of a triangle. Berkeley argued that it is impossible to form an abstract idea of a triangle:

“If any man has the faculty of framing in his mind such an idea of a triangle as is here described, it is in vain to pretend to dispute him out of it, nor would I go about it. All I would desire is that the reader would fully and certainly inform himself of whether he has such an idea or no. And this, methinks, can be no hard task for anyone to perform. What is more easy than to look into his own thoughts, and there to try, whether he has, or can attain to have, an idea that would correspond with the description that is here given of the general idea of a triangle, which is neither Oblique, nor Rectangle, Equilateral, Equicrural, nor Scalene, but all and none of these at once.” (Berkeley: Principles of Human Knowledge Sec 10)

Pylyshyn and Fodor use this argument of Berkeley's as a demonstration that we have some concepts of which we have no image. However, they extend Berkeley's argument further and claim that it shows that concepts cannot be images. They argue for this by noting that our concept of a particular person, John, is too abstract to be captured by a particular image of John as we know him through all of his changes. I think this is largely right. Even though people with synaesthesia typically do associate various ideas with particular images, there is no evidence that I am aware of that the concepts they have are exhausted by the images they associate with them.

Nonetheless there are particular aspects of their argument which concern me. Firstly, the argument cites Berkeley, who claims that he cannot form a particular type of mental image: a vague image of an abstract idea of a triangle. David Berman (2009) has argued convincingly that George Berkeley was an eidetic imager, and that part of his critique of Locke derived from the fact that he was relying on his own introspection of how HE thought. As an eidetic imager Berkeley used vivid images in thinking about abstract ideas, hence the idea of forming a vague image of an abstract triangle seemed impossible to him. Locke, who wasn't an eidetic imager, would not need to form a vivid image of an abstract triangle in order to think about it. Hence the dispute between him and Berkeley was partly to do with their different abilities to form mental imagery. The interested reader should read Berman's 'Berkeley and Irish Philosophy' and his 'Philosophical Counselling for Philosophers' for evidence that Berkeley was an eidetic imager and that this fact had serious implications for his type of philosophy. I won't try to prove the point about Berkeley here; I bring it up merely to point out that we shouldn't just assume that all people have or use mental imagery in the same way. We have known since Galton and James that people have different abilities to form mental imagery, and Kosslyn has added some interesting neuroscientific data to support this. Ultimately I don't think that variation in people's ability to form mental imagery substantially affects the point that Fodor and Pylyshyn are making. I am not aware of any eidetic imagers whose images exhaust the concepts that they have. As Wittgenstein showed convincingly in the 'Philosophical Investigations', any mental images have to be interpreted, and these interpretations are conceptual in nature. So the images themselves are not the concepts but are interpreted in light of concepts.

Nonetheless I think that Pylyshyn, who more than Fodor has worked on mental imagery over the last 30 years, should at least deal with variations in people's abilities to use and form mental imagery before claiming definitively what images are or are not. The connection between a weak imager's concepts and images and an eidetic imager's concepts and images may be very different, and a person who has no mental imagery may have different types of concepts to a person who is a mental imager. When Fodor and Pylyshyn argue that concepts are not images they need to deal with the empirical fact that people have different abilities to form mental images.

Third Quibble:

Fodor and Pylyshyn operate with a classical view of concepts and seem happy to ignore the mountains of empirical evidence showing that the classical concept of 'concept' is false. In their discussion of why concepts cannot be mental images they make the following claim:

“Likewise, you can picture a chair, but not the property of being a chair, which is what all chairs have in common as such, and so on. The problem, in all such cases, is that the property shared by all and only things in the extension of the concept (i.e. things the concept applies to) can’t be pictured” (‘Minds without Meanings’ p. 24)

Now whatever one may think of their views on the nature of imagery their view that concepts of x apply to all and only things that have the property of x seems like a classical view of concepts. Here is George Lakoff’s gloss on the classical theory of concepts:

“…The common idea of what it means to be in the same category: Things are categorised together on the basis of what they have in common. The idea that categories are defined by common properties is not only our everyday folk theory of what a category is, it is also a principal technical theory-one that has been with us for more than two thousand years” (Lakoff: Women, Fire and Dangerous Things p. 5)

Lakoff notes that this view of concepts is a folk assumption about the nature of concepts, one that we have had with us since at least the time of Plato (more than likely it predates Plato). However, while this folk view of concepts has a long history and tradition behind it, there is little empirical evidence to support it. There are key aspects of categorising things in the world that it leaves out; three importantaspects are metaphor, metonymy and chaining.

In his 'Women, Fire and Dangerous Things' Lakoff noted that there are eleven key features of the non-classical theory of concepts: (1) concepts are family resemblances, (2) concepts have centrality, (3) polysemy as categorization, (4) generativity as a prototype phenomenon, (5) membership gradience, (6) centrality gradience, (7) conceptual embodiment, (8) functional embodiment, (9) basic-level categorisation, (10) basic-level primacy, and (11) reference-point, or metonymic, reasoning (ibid p. 13).

Now obviously I cannot go into the evidence for all of these features of concepts that Lakoff argues exist; that would take an entire book as opposed to a blog. But I do think that Lakoff raises some salient points that are directly relevant to Fodor and Pylyshyn's position on the nature of categorisation. Firstly, Lakoff appeals to the fact that Wittgenstein showed that concepts are family resemblances: members of a category are related to each other, but they don't all share common properties. Wittgenstein supported his conclusion by an exhaustive examination of ordinary language concepts, and by the fact that philosophers since Plato have been searching for the essence of various concepts and the search has been in vain. The psychologist Eleanor Rosch empirically confirmed Wittgenstein's view that concepts are family resemblances.

Now Fodor doesn't deny Wittgenstein's point that we cannot find necessary and sufficient conditions for particular concepts; he argues that this merely shows that concepts are not definitions. He does, however, reject the second strand of Rosch's argument: that concepts are individuated by prototypes. Rosch noted that if the classical theory of concepts were correct, then since every member of a category is supposed to share the same essential property, any member would be as good an exemplar of the concept as any other. So, for example, with the category 'pet', a spider would be as good an example of a pet as a dog. However, when Rosch tested subjects empirically she discovered that people do in fact sort their categories in terms of exemplars: people judge that some members of a category are more central than others. Thus dogs are a better exemplar of 'pet' than spiders are. This finding showed that despite the fact that the classical theory of concepts is entrenched in our tradition, the empirical evidence favours a non-classical theory of concepts.

Given this experimental evidence, why do Fodor and Pylyshyn seem to accept the classical view of concepts? Strictly speaking they don't. They think the evidence shows that since concepts are neither definitions nor prototypes, we have to stop thinking of them as intensions that determine extensions. Fodor uses the lack of definitions discovered by philosophers to argue against the view of concepts as definitions. When it comes to concepts conceived as prototypes, Fodor argues that concepts are compositional and prototypes don't compose, so prototypes cannot be concepts.

Fodor gives two reasons why concepts cannot be prototypes. (1) The Uncat Problem: Fodor argues that concepts cannot be prototypes because there are countless concepts which have no prototypes. He asks us to think of the complex concept 'the uncat', which appears to have no prototype. Fodor notes that if we were to say that, for example, a stapler was a prototypical example of the 'uncat' concept, we would be led to absurdities: we would be forced to assume that the more something is like a stapler the less it is like a cat, and the less something is like a stapler the more it is like a cat. Since this conclusion is absurd, we are forced to conclude that a substantial number of concepts have no prototypes; hence the theory that concepts are prototypes is false (Fodor: Concepts p. 101). (2) The Pet Fish Problem: According to Fodor, prototype theorists explicate notions like falling under a concept in terms of being similar to its exemplar. He further argues that a prototype theory can only respect the compositional nature of concepts if it accepts that a thing's similarity to the exemplar of a complex concept is determined by its similarity to the exemplars of its constituents (ibid p. 102). But Fodor notes that this does not work for many concepts. Thus, while a guppy may be a prototypical 'pet fish', it is a prototype neither of 'pet' nor of 'fish'. So Fodor argues that 'pet fish' shows either that the complex concept is not constructed compositionally or that concepts are not prototypes.

Fodor's arguments rely on a fundamental misunderstanding of prototype theory. In setting out both arguments he makes the false assumption that prototype theory requires that for something to fall under a concept it must be similar to the concept's exemplar. This is not necessarily the case. In his 'Women, Fire and Dangerous Things' Lakoff examined the work of R. M. W. Dixon (1982), who analysed the categorisation systems used by the Dyirbal, an Aboriginal people of Australia. Dixon noted that though at first glance the categories make little sense, when one analyses them closely one finds that they conform to the pattern described by prototype theorists.

I will give one example of how the Dyirbal categorise:

“Balan: women, bandicoots, dogs, platypus, echidna, some snakes, some fish, most birds, fireflies, scorpions, crickets, the hairy mary grub, anything connected with water or fire, sun and stars, shields, some spears, some trees, etc” (Dixon 1982 taken from Lakoff ‘Women, Fire and Dangerous Things’ p. 93)

Dixon's analysis of the concepts of the Dyirbal found that they follow the following rules. (1) Centrality: basic members of the category are central, so things like 'gar fish' and 'hairy marys' are less central than 'women' in the balan category. (2) Chaining: complex categories are structured by chaining; central members are linked to other members, which are linked to other members, and so on. For example, women are linked to the sun, which is linked to sunburn, which is linked to the hairy mary grub (ibid p. 94). Note that number 2 contradicts Fodor's claim that prototype theorists hold that all members of a category must be similar to the exemplar of the concept. That claim is simply false. What prototype theory actually says is that the members of a category can be traced back to the main exemplar via a chain of linked members, not that all members of a category have to be similar to the exemplar in some relevant way. Fodor's objections to prototype theory are really just objections to his incorrect version of what prototype theory is. (3) Experiential Domains: there are basic domains of experience, which may be culture-specific; these can characterize links in category chains. (4) No Common Properties: members of a category do not need to share a common property with each other. I won't list all of the rules Dixon uncovered, but the ones I have listed show that Fodor fundamentally misunderstands the nature of prototype theory.

I have briefly discussed the major assumptions of Fodor and Pylyshyn's 'Minds Without Meanings' and shown which assumptions I agree and disagree with. I have also outlined three minor quibbles I have with their negative critique of contemporary theories of concepts: (1) they caricature rival theories; (2) they ignore variation in people's ability to form mental imagery; (3) they misunderstand the nature of the prototype theory of concepts and hence are attacking a target they don't understand. In my next blog I will evaluate their positive theory of the nature of reference.

Behaviourism: A Mental Health Risk?

“Can anyone live as a committed behaviourist without separating their philosophy from lived experience? I doubt it. B.F. Skinner’s life embodied a fenced-off polarity between a scientific life of disciplined objectivity and an inner life of lush and conflictful subjectivity.” (Baars ‘The Double Life of B.F. Skinner’ p. 15)

Radical behaviourism is typically attacked in cognitive science in two different ways. (1) It is argued that behaviourists don't postulate enough innate apparatus to account for various aspects of cognition. Chomsky is largely responsible for this type of criticism of Skinner; his review of Skinner's 'Verbal Behaviour' is accepted by everyone in cognitive science as a refutation of behaviourism. People who have never read 'Verbal Behaviour', or even Kenneth MacCorquodale's excellent reply to Chomsky's review, are certain that Chomsky refuted Skinner. People also bizarrely accept Pinker's claim that behaviourists are committed to a blank slate ideology, despite the fact that all of the evidence goes against this belief of Pinker's. (2) It is argued that behaviourism cannot account for conscious experience. Philosophers and scientists have criticised Skinner on this issue for years. In his 2003 paper the neuroscientist Bernard Baars attacked B.F. Skinner on the issue of consciousness, arguing that Skinner's life was at odds with his behaviourist philosophy. He told a psychoanalytic story in which Skinner, having failed to become a writer, swung in the opposite direction and publicly attacked subjective experience and literature. Baars tells a story of emotional turmoil which is revealed in three key moments. (1) Skinner as a child wanted to be an artist but had that dream crushed. (2) Skinner wrote a short story in which the main character, Elsa, gave up her freedom to accept an unwanted fate. (3) Skinner's 'Walden Two' has two protagonists who he admits represented him (though he didn't realise this when writing the book), showing a struggle between two different sides of him: the artist craving freedom and the objective, conformist behaviourist. Baars thinks this reveals an unresolved unconscious conflict bubbling under the surface of Skinner.

His essay is extremely speculative, but it is both interesting and entertaining. Nonetheless I disagreed with a lot of it. I have some issues with his characterisation of the philosophical reaction to Skinner's project. Baars claims that Russell, Ryle, and Wittgenstein endorsed Skinner's behaviourism. While Ryle and Wittgenstein did admire the work of Skinner, they were not radical behaviourists; Ryle's logical behaviourism was an entirely distinct research programme from Skinner's. Likewise, Russell greatly admired Skinner but was critical of aspects of his behaviourism (see, for example, his 'Analysis of Mind' and 'An Inquiry into Meaning and Truth'). Baars fails to mention Quine, whose account of language acquisition is very similar to Skinner's. More than any other philosopher, Quine tried to develop Skinner's behaviourist account, so it is a strange oversight that Baars doesn't mention Quine and instead focuses on philosophers whose views are much less congenial to Skinner's research programme.

Baars claims that even today many philosophers are crypto-behaviourists, though he gives little evidence to support this aside from briefly citing the work of the philosophers George Rey and Wilkes. But Baars merely says he disagrees with their views on consciousness; he doesn't specify why those views are wrong or why they indicate a crypto-behaviourism. There are, of course, actual behaviourists working in Applied Behavioural Analysis, but there is nothing crypto about them, and since Baars doesn't engage with any of this scientific research he is in no position to criticise it.

As part of his psychoanalytic interpretation of Skinner, Baars notes that Skinner would regularly walk home reading Baudelaire, and argues that this is somehow inconsistent with Skinner's experimental behaviourism. I really see no inconsistency here. Saying that x is not an appropriate subject for scientific analysis does not mean that one must ignore it in daily life. There is always a gap between our daily pragmatic interactions with the world and our best scientific theory. Is an engineer who understands the theory of relativity but uses Newtonian calculations for convenience guilty of some radical incoherence? In our practical engagement with the world we don't act like perfect logicians; we do things for amusement, for convenience, etc., but when doing our science we obviously try to be more rigorous. The fact that Skinner, like all scientists, didn't apply his theory to every aspect of his life is not a valid criticism of him; it merely shows that he is human. I have discussed this point in more detail in an earlier blog, 'Theoretical Research and Practical Behaviour'.

Baars repeatedly argues that Skinner believed that consciousness did not exist. While it is true that Skinner was somewhat inconsistent about this issue (see Dennett's 'Skinner Skinned' for an analysis), the central thrust of Skinner's views was that consciousness was not a proper subject of scientific study; he was pretty much agnostic about its actual existence. It's notable that Baars doesn't cite actual quotes from Skinner to prove his case; instead he goes into a discussion of Watson. This is a strange move considering that Watson and Skinner advocated different kinds of behaviourism. Watson did actually deny the reality of things like mental images, despite claiming to have been a vivid mental imager earlier in his career (see Berman and Lyons, 'The First Modern Battle for Consciousness: J.B. Watson's Rejection of Mental Images' (2007)). But since Watson's views on mental imagery have no bearing on Skinner's views, it is difficult to understand why Baars brings them up; perhaps it is because he can find no evidence on the issue in Skinner's writings.

When Baars does quote Skinner, he notes that Skinner tried to translate some mentalistic terms into behaviouristic terms. He cites Orwell's essay 'Politics and the English Language', where Orwell warns about the use of cosy euphemisms which block us from direct access to what is actually going on in the world:

“People are imprisoned for years without trial, or shot in the back of the neck or sent to die of scurvy in Arctic lumber camps: this is called elimination of unreliable elements. Such phraseology is needed if one wants to name things without calling up mental images of them.” (Orwell: 1946)

Baars speculates that when behaviourists translate the idioms of folk psychology into behavioural language, they may end up divorcing themselves from their own lived experience. Personally I have no problem with the operational definitions typically used by Applied Behavioural Analysts. The definitions give a measure of objectivity and make prediction and control of behaviour easier. Baars's speculation about the effects of these operational definitions may well be true (or false); however, he presents absolutely no evidence to support it.

If Baars really believes his hypothesis I think he should try to test it empirically: perhaps through questionnaires, along with MRI scans, on behaviourist and non-behaviourist psychologists to see whether behaviourists are emotionally different, or through studies of psychologists who have mental breakdowns to see whether behaviourists are more likely than cognitive scientists or psychoanalysts to break down. The fact that Baars doesn't even suggest tests like these indicates to me that he is not very serious, and that he is using the Orwell claim as an ad hominem attack on behavioural analysts. Whatever his motives, there is no reason to take his claim seriously until Baars provides some evidence to support his wildly speculative (and insulting) claims about behavioural analysts.

While I don't agree with Baars's speculative claims about behaviourists being cut off from their own experiences, I am in full agreement with him that Skinner held back our scientific understanding of consciousness. Skinner was not a blank slate theorist, and he did not deny that consciousness existed; but, like a lot of behaviourists, he was too timid a theoriser. Behaviourists emphasised that they allowed any innate apparatus which could be determined experimentally, but in practice their theories made much less use of innate apparatus than other psychologists'. This is because they thought that unverified postulates were a sign of bad science: we are only justified in saying something exists if we can prove it experimentally. But that is just batty science; a science that uses this methodology will proceed at a snail's pace. Popper was surely correct that bold postulates that go beyond observation and have not yet been justified experimentally are the life blood of science. There is nothing wrong with such postulates as long as we treat them critically and try to find ways to verify or falsify them.

This timid methodology didn’t just apply to innate apparatus but to consciousness studies as well. Behaviourists worried about the subjectivity of consciousness, about the fact that it could not be measured by objective scientific instruments. Watson, in his (1913) ‘Psychology as the Behaviourist Views It’, pointed to the seemingly interminable debate between the followers of Wundt and the followers of Titchener. He correctly noted the various methodological flaws in introspectionist psychology. But he simply gave up too soon. It seemed at the time that we simply lacked the tools to study consciousness objectively. Theorists like Crick, Koch and Baars have shown otherwise. Their success[1] came through bold hypotheses, and through not viewing methodological concerns as impossible to overcome. Behaviourists were wrong on the issue of consciousness because of the timid nature of their methodology, not because they were blank slate theorists or mentally ill zombies.

[1] I am not suggesting that we are anywhere near being able to solve the hard problem of consciousness or the relation between personal and subpersonal explanations. I am merely saying that the study of consciousness has made progress that most behaviourists doubted would be possible.

Weinberg: The Discovery of Modern Science

Steven Weinberg’s (2015) book ‘To Explain the World’ is an interesting attempt to explain how modern science developed. He begins with pre-Socratic philosopher/scientists like Thales and Parmenides, works his way through Plato and Aristotle to the Hellenistic age with figures such as Ptolemy, and moves on to consider the work of Descartes and Galileo, culminating in the work of Newton. At the end of the book he briefly considers post-Newtonian physics, but the central aim of the book is to explain how science developed up until the magnificent work of Newton.

There are a lot of good points in the book. One really welcome aspect is the set of semi-technical appendices which help the reader understand in a bit more detail some of the logic and mathematics behind the various theories he explains. These appendices are easy to follow and provide some much-needed content that is often sorely missing from other pop-science books. Furthermore, he doesn’t present the pre-Copernican world as full of raving idiots. Some scientists who don’t bother actually to study pre-scientific attempts to understand the world pretend that all theorists prior to the scientific revolution were pretty much making things up as they went along. Weinberg avoids this trap and manages to illustrate the sometimes sound reasoning that went into the various pre-scientific cosmologies. That said, Weinberg has no difficulty criticising theorists for getting things wrong, either on their own terms or by our present-day lights. He explicitly sets himself against the view attributed to Kuhn (in his more radical phase) that we are not really in a position to judge different scientific paradigms because some paradigms are incommensurable and cannot be judged by the same standards.[1] Weinberg is defending the idea of scientific progress and has no problem criticising earlier theorists for getting things wrong either by our lights or on their own terms. Here he is on Aristotle:

“But if Aristotle really did present the scheme presented in ‘The Metaphysics’ , then this cannot be explained as a matter of thinking in terms differently from ours, or being interested in different problems from ours. We would have to conclude that on his own terms, in working on a problem that interested him, he was either careless or stupid” (‘To Explain the World’ p.84)

Weinberg’s irreverence is refreshing; he criticises any idea or theory which he finds lacking in evidence.

His discussion of the Hellenistic era is particularly interesting. He argues that, despite the typical historical narrative that the world of Plato and Aristotle was the pinnacle of pre-scientific thought, the Hellenistic era in fact gave us superior results in science (by which he pretty much means physics) than anything Plato or Aristotle gave us. Weinberg conjectures that the Hellenistic era's emphasis on practical inventions, as opposed to grand speculative narratives, led to its making more progress in physics than its predecessors.

Despite the fact that I enjoyed the book and learned a lot from it, there were aspects of it that I was less enamoured with. One constant irritation was Weinberg’s focus on nothing but physics (and, to a much smaller degree, chemistry). This choice is obviously explained by the fact that Weinberg, as a Nobel Prize-winning physicist, knows the history of physics much better than he knows the history of the development of the other sciences. However, when a book has the subtitle ‘The Discovery of Modern Science’ I think the author owes us something more than a history of one branch (albeit the most successful one) of science.

His focus on one area of science leads him to give some dreadfully one-sided caricatures of various philosophers. A case in point is Plato. Anybody who has studied Plato to any degree at all will admit that he wasn’t a physicist in any sense of the word. While some pre-Socratic philosophers, and likewise Aristotle, could be considered proto-physicists in a very crude sense, Plato had no interest in cosmology, and when he discusses it he merely offers crude creation myths (see, for example, the end of The Phaedo). Because of this, Weinberg dismisses Plato as a poetic thinker who was not really concerned with accurately representing the world but rather with aesthetic matters.

This is a stunningly unfair characterisation of Plato. Plato discussed many different topics, from human nature, to the best form of human society, to the nature of mathematics, to ethics. Plato’s work on innate ideas was one of the first examples of what later came to be known as a poverty of the stimulus argument. Chomskyan linguistics has used the same logic to argue for an innate grammar over the last fifty years, and Chomsky explicitly acknowledges his debt to Plato by naming the poverty of the stimulus argument in linguistics ‘Plato’s Problem’. Furthermore, Plato’s tripartite theory of the mind prefigured both Hume and Freud. His views on human nature were formed by both empirical observation and logical analysis, typically achieved through systematic dialogue. His views should be considered early attempts to do psychology, not crude poetry unconcerned with discovering the truth.

Plato’s theory of what a just society should be also took account of things like human nature and tried to accommodate what he thought about human nature into how we should build our society. He was spectacularly wrong at times. But his views were very important steps on the way to understanding subjects like sociology, psychology and political science as well. So I think Weinberg was entirely wrong to dismiss Plato as a spinner of myths, and I believe he was only led to this by his excessive focus on physics as the only real science.

When he appeared at the recent ‘Moving Naturalism Forward’ conference Weinberg offered his theory of ethics. His crude position amounted to the claim that he doesn’t care what a utilitarian ethics teaches us: he and most people are simply going to look after their own and not make the sacrifices that people like Singer recommend. Weinberg didn’t really offer much justification for his views and wasn’t really pressed by the other panellists. Presumably he had some idea that our human nature will prevent us from being strict utilitarians. Perhaps he is right, but his claim reminded me of the confident assertions of Socrates’ foils in Plato’s dialogues. Socrates often managed to convince his opponents they were incorrect by drawing them into a series of contradictions implicit in their confident claims. I think Plato’s dialogues remind us that our expressed views on ethics are not always as internally consistent as we think, and that testing our intuitions using various thought experiments gives us a way of weeding out our contradictory beliefs. Plato here helped us on the road to understanding our ethical beliefs better. And this is now a flourishing area of science and philosophy.

I mentioned earlier Plato’s views on innate ideas and how he integrated his views on human nature into his theory of the ideal society. This approach was maintained throughout the history of political philosophy; thus Hobbes’s and Rousseau’s differing views on human nature influenced their views on the way we need to build a just society. The debate between Descartes and Locke on innate ideas was a follow-on from Plato’s arguments. Locke and Descartes appealed to anthropological evidence, historical evidence, psychological evidence and logical arguments in their debate. This debate has replayed itself throughout history, with different protagonists like Skinner and Chomsky arguing for different amounts of innate apparatus. Some of this was detrimental to psychology, as it polarised people too much in one direction or another, but it is undoubtable that Plato, Locke and Descartes played a huge role in the development of psychology, political science, linguistics etc.

I think that by ignoring the role these philosophers played in developing sciences other than physics, Weinberg did the philosophers a huge disservice. He also did science a huge disservice by overplaying the importance of physics and chemistry relative to all the other sciences, like psychology and biology.

[1] I won’t get into the intricacies of Kuhn’s philosophy of science. I will merely mention that it is much more nuanced than Weinberg seems to think.

Concepts: A reply to Jim Hamlyn and John Kubie

Neuroscientist John Kubie criticised a recent claim I made in my Blog-Post ‘What is a Concept and What Creatures Have them’. He argued as follows:

 “Your second sentence “A concept is to some degree a theoretical term that we use to explain, predict and control the behaviour of various different creatures” seems wrong to me. For me the core sense of “concept” has nothing to do with communication or control of other creatures. It has to do with aggregation of diverse stuff into a single “symbolic?” Entity. It’s a logical operation.”

I think what I wrote here was a bit ambiguous, so it may need some clarification. I was speaking about the evidence we use to justify our attribution of a concept to another creature and noting the importance of behavioural evidence in such an attribution. So, for example, consider the paradigm where researchers try to discover what conceptual abilities pre-linguistic children have by monitoring their surprise reactions. A child is typically habituated to a particular phenomenon, e.g. a ball passing behind a screen and coming out the other side. After the child is habituated to the phenomenon, they are presented with strange behaviour of the ball: one ball may pass behind the screen and two balls come out the other side, or a ball may go behind the screen and not come out at all. If the child looks surprised at these events we assume that they have some expectations of how objects should behave and are surprised when the objects do not behave in that way. Now Piaget and Quine both criticised this experimental procedure, claiming that it merely shows that children have expectations of continuity in their environment, not that they have concepts of Object, Number etc. In recent times Carey and Soja have made the experiments more sophisticated to avoid the objections of Piaget et al. My point here isn’t to evaluate the theories of Carey et al but merely to indicate that in these experimental settings concepts are attributed to children because of the explanatory role they play in the theorists' experimental theories. Or, outside of the lab, people may attribute the concept fly to a frog because of its behaviour around flies (in this instance they would probably be wrong).
In these settings we use our attributions of concepts as part of an explanatory schema to explain what the creature will do, and we modify our attributions if the creature behaves in ways that we did not expect.

When John argued that:

 “The core sense of “concept” has nothing to do with communication or control of other creatures. It has to do with aggregation of diverse stuff into a single “symbolic?” Entity. It’s a logical operation”

I am confused as to what he meant by this claim. Is he offering a theoretical definition of the term ‘Concept’ which he thinks will make our theorising easier? Is he in a sense offering a convention as to how the word ‘Concept’ is to be used in scientific theorising? Or is he arguing that his definition corresponds with the ordinary language use of the word ‘Concept’? I would argue, following the philosophical work of Wittgenstein and the scientific research of Lakoff and Rosch, that there is no one accepted meaning of ‘concept’. Rather the word is a family resemblance term (though people may have a prototype of concept in mind when they use the term). Either way, I would need to know what John has in mind when he argues that a concept is “an aggregation of diverse stuffs into a single symbolic entity” before I can adequately reply to his comment.

In my blog ‘What is a Concept and Which Creatures Have Them’ I criticised philosophers who think that only language-using creatures have concepts. I argued that these theorists are acting like Platonists about the nature of concepts and ignoring recent empirical research into the subject. Philosopher Jim Hamlyn has long argued that animals have no concepts unless they are language users and tool users. So, for example, while dolphins may be among the most social and intelligent animals on the planet, because they don’t use tools he would argue we are not justified in attributing conceptual abilities to them. In his recent very interesting blog ‘Tools in the Workshop of Language’ Jim defended and clarified his position on animal concepts.

One thing that is very clear from the blog is that Jim adopts a guilty-until-proven-innocent approach to attributing concepts to non-linguistic animals. He argues as follows:

“Secondly, the assumption that certain sophisticated behaviours can only be explained by concept-possession is only justified if every other explanatory alternative has been ruled out.”  
This seems a very strange criterion to offer for when a theorist is justified in attributing a concept to another creature. A theorist typically doesn’t say that a behaviour can only be explained by concept attribution; rather they would argue that attributing conceptual abilities to these creatures is the best way of explaining their behaviour.

By setting things up in this way Jim gives himself an easy task. He assumes that people who attribute concepts to non-human animals claim that ONLY conceptual explanations are possible, so if Jim can tell any story that doesn’t attribute conceptual abilities to non-human animals he has automatically refuted his opponent. Except of course he hasn’t refuted his rival theorist, because the rival theorist claims that the conceptual explanation is the best explanation, not that it is the only one.

Jim seems to start from the assumption that the conjecture that animals have no concepts should be favoured from the outset over the view that they do have concepts. Here is an example of this:

“Experience leads to the development of expectations about the regularities of the universe and of unfolding events. I propose that we rule out all possible nonverbal explanations before we are tempted to ascribe capacities of conceptualisation to nonverbal creatures.” (Hamlyn ‘Tools in the Workshop of Language’)

By adopting this question-begging approach he shifts the burden of proof onto his opponents. This approach is, though, problematic: it is not always easy to decide whether view A or view B should be accepted as our null hypothesis. In her recent book Kristin Andrews spelled out the difficulties in deciding what our null hypothesis should be:

“But perhaps a greater problem arises at the point of deciding on the null hypothesis itself, because beginning an investigation of a property with a sceptical view may introduce a bias against animals having that property. When our concerns are purely epistemic, as they presumably are in the case of animal cognition, it isn’t clear why either the sceptical or optimistic hypothesis should get preferential treatment from the outset” (Andrews: ‘The Animal Mind’ p. 42)

I think Andrews has a point; in debates about whether we should attribute various mental abilities to animals, people do seem to operate with the null hypothesis that it is simpler to avoid attributing things like concepts to animals if we have an alternative story. Marion Stamp Dawkins in her ‘Why Animals Matter’ adopts this approach. Andrews notes that there are some good reasons not to set the null hypothesis in the way people like Dawkins do. She cites Elliott Sober (2012) and Frans de Waal (1991), who have argued that, given facts about convergent evolution and the continuity of life on earth, the opposite assumption should be made. Sober’s idea is that if humans and closely related species (apes) both exhibit behaviour X, and humans produce behaviour X by occupying mental state M, then we have evidence that M is the proximate mechanism that the chimpanzees employ in producing X (ibid p.43)[1]

I personally prefer to adopt a wait-and-see approach. I think Sober’s approach is ambiguous as to how far down the tree of species we can go and still assume mental continuity, while those who prefer to make the opposite assumption typically do so by arguing that their approach is simpler, yet never specify precisely how they are judging simplicity. In my view, if we have two theories, one of which says that dogs are concept users and the other of which says they are not, and we do not have sufficient evidence to decide the issue, we should continue our search for the correct answer, not make question-begging assumptions as to which approach we should assume is true by default.

As his blog-post goes on, Jim argues that Jerry Fodor’s language of thought thesis is impeding progress in the study of animal mental capacities. This is news to me. Fodor is pretty much a whipping boy in cognitive science: he is one of the most cited theorists in the philosophy of mind, but is typically only cited so that people can say why they disagree with him. Researchers into the conceptual abilities of pre-linguistic children such as Spelke, Carey, Soja and Markman have all been very critical of Fodor’s views on concepts. Susan Carey, in her (2009) book ‘The Origin of Concepts’, is very critical of Fodor’s views on concepts. I rarely see his name mentioned in the animal ethology literature. I think that to focus on Fodor’s views on animal conceptual abilities in the context of this discussion is radically wrongheaded, given how most cognitive scientists disagree with him.

When dealing with proponents of a particular position I think it is best to focus on the best theorists holding that position. In defence of his position Jim mentions the arguments of Davidson as being very compelling against the view that animals have concepts. In an earlier blog I evaluated Davidson’s arguments in detail and critiqued them all. I think Jim would not find it very helpful if I pretended that Descartes (who held that animals were unthinking machines) represented the best current evidence in favour of Jim’s position. Likewise I respectfully submit that Jerry Fodor doesn’t represent mainstream cognitive science, and if Jim wants to criticise representational views in cognitive science then refuting Jerry Fodor is not the way to go.

Jim ends his blog-post with an interesting discussion of what he takes to be a problem with attributing concepts to animals. He notes correctly that an animal learning a few proper nouns does not mean that the animal can manipulate and combine them in the way language users do. He argues that a creature who has concepts but doesn’t have the skills to use them in productive ways has nothing more useful than a stone or a stick that it doesn’t know how to use.

I think Jim’s emphasis on the analogy between concepts and tools is a good one and warrants further exploration. When we attribute concepts to animals we are arguing that they can use these concepts to think and decide what to do. When making this claim we need to be very particular about what we are claiming. Any claim that an animal can use concepts in some internal language of thought by combining them in certain ways needs to be very clear about the behavioural evidence for how these concepts can and cannot be combined. This type of evidence is rarely if ever presented, and until it is I think sceptics about animal concepts will have a point. In his (2012) paper ‘Why We Cannot Say That Animals Think’ Jacob Beck argued that we are not justified in assuming that animals have concepts because we have no theory of how they systematically use them. Beck followed Evans (1982) in arguing that if we are to attribute conceptual abilities to a creature, the creature must be capable of using these concepts systematically. Beck was responding to Susan Carey’s[2] work, which attributed concepts of numbers to some animals. By analysing the experimental evidence of pigeons’ behavioural responses he demonstrated that the pigeons were incapable of using these concepts in a way that respected the systematicity and generality constraints. So he concluded that we are unjustified in assuming that pigeons are thinking using concepts. I think that this constraint is a good one to help refine research in the area (Andrews claims that Carruthers (2009) demonstrated that animals do satisfy a weak version of the generality constraint), and those of us who claim that non-linguistic animals have concepts need to meet this criterion. So this leaves a lot of empirical legroom in the debate, and I think Jim has a point when he argues that the ability to use a concept for various purposes (which requires some kind of systematicity) is the key area to be researched.

[1] Andrews got the quote from Sober (2012) pp 3-4

[2] Here I am following Kristin Andrews's explication of Beck's work in Andrews (2015)

Constructed Worlds, Forms of Life and Intellectual Disabilities

In his ‘Philosophical Investigations’[1] Ludwig Wittgenstein discussed simple language games as a way of exploring the nature of our own language. One of the themes discussed early in the PI is ostensive definition and how it could be used to teach that a word refers to a number, a shape, a colour or a whole object (e.g. ‘Slab’), or that a word is being used as a demonstrative (‘this’ and ‘that’). Wittgenstein’s discussion reveals a kind of poverty of stimulus in learning a word through ostension. If a person points to something and says ‘Blickiri’, it is unclear purely from the ostension whether the person is referring to a colour, a shape, an object, a number etc. Wittgenstein notes that we could limit the possible interpretations of ‘Blickiri’ if it were prefixed by ‘is a colour’, ‘is a number’ etc. However, if the person we are trying to teach by ostensive definition already understands predicates of our language like ‘is a colour’ and ‘is a number’, they already understand complex aspects of our language and hence are not being taught from scratch by ostension.

On the picture sketched by the later Wittgenstein, we learn our language through our shared form of social life with our peers. Throughout the PI Wittgenstein analyses how words get their meaning, and shows that supposed bearers of linguistic meaning, such as private mental images, are not sufficient to the task of providing linguistic meaning. His emphasis on the social determination of meaning, shared practices and actual linguistic use broke philosophy away from its traditional a priori methodologies. Instead of arguing that concepts MUST have a certain structure, or that we MUST be following certain determinate rules when reasoning, Wittgenstein asked us to look at our practices. This approach, if adopted, makes philosophy more pragmatic and interested in our actual interactions with the world, as opposed to a purely speculative or armchair discipline.

Martin Heidegger’s dense prose and profound-sounding claims about the nature of ‘Being’, ‘Language’, ‘Concealment’, ‘Technology’ etc at first glance seem far removed from the pragmatic spirit of the later Wittgenstein’s philosophy.[2] Furthermore, Heidegger’s later philosophy is no more compatible with pragmatism than is the philosophy the early Wittgenstein espoused in his ‘Tractatus’. However, it has often been noted that aspects of ‘Being and Time’, with its focus on everyday life, our concernful engagement with the world, the role of implicit awareness, and embodied engagement with the world, are also subject to a pragmatic interpretation.

While Heidegger argued against a psychological interpretation of his work, it has nonetheless been used a great deal in both psychology and psychoanalysis. Robert Stolorow and George Atwood are excellent scholars of Heidegger and psychoanalysts who have managed to combine Heidegger with Freud. Their work has greatly enriched some Freudian concepts and has helped in understanding the lived world of people suffering from psychosis. Clinical psychologist Louis Sass has likewise combined the insights of the later Wittgenstein and Heidegger’s ‘Being and Time’ as a way of understanding the lived experience of people with schizophrenia.

In this blog-post I want to discuss the social world that is partially constructed for people with severe intellectual disabilities. My aim is not to provide a scholarly discussion of the philosophy of either Wittgenstein or Heidegger; rather I will use some of their concepts to help me elucidate how the world of people with severe intellectual disabilities is partially socially constructed. In an earlier blog-post, ‘Radical Interpretation and Intellectual Disability: The Principle of Charity and The Typical Mind Fallacy’, I tried to explore some of the challenges of interpreting how people with various forms of intellectual disability experience the world. There I called for more descriptive work, similar to what Oliver Sacks has done for people with neurological disorders, to help health care workers understand the subjective world of the people they care for. The emphasis in that blog-post was on the subjective world of people with intellectual disabilities and the challenges of interpreting it. In a later blog-post, ‘Some Behavioural Techniques and The Idea of a Blank Slate’, I briefly discussed some behavioural techniques that are useful for dealing with people with intellectual disabilities who engage in challenging behaviour. There the focus was on objective features of the environment and behavioural techniques that reduce the probability of a particular behaviour occurring. In this blog-post I aim to discuss an intermediate state: the lived world of people with intellectual disabilities. Here my focus will be on intersubjective communication and the social world that people with intellectual disabilities sometimes live in.

In the 1960s, when a child was discovered to have an intellectual disability, a lot of the time the child would be immediately abandoned to state care. Things nowadays are much different: people in general have a greater grasp of what having a child with intellectual disabilities actually entails, and the stigma once associated with it has been greatly eroded. Furthermore, there is much greater social support for the parents of a child with intellectual disabilities than was previously available. This isn’t to imply that things are perfect; that is obviously far from the case, and many challenges still exist. My point is merely that things have to a small degree improved.

When discussing the social world of a child with severe intellectual disabilities, my focus will be on the social world of the institution they live in. As I mentioned earlier, children are not automatically placed in care when they have severe intellectual disabilities[3]. There is a great emphasis on the importance of the child staying with his family and being included in the wider community. Nonetheless, caring for a child with severe intellectual disabilities, while perhaps also caring for other children, is an extremely difficult job emotionally. To help parents provide the best care possible, respite services are offered to them, where their child can be cared for by a community house which specialises in respite care.

Depending on the level of care required by the child and the family circumstances, respite care can begin at any age, but it is typically provided for children as they get older and harder to manage. On average children will enter respite care from the age of 5 or 6. Like all children, a child with intellectual disabilities is thrown into a pre-prepared world upon birth. A child is born into a family drama[4]; the parents give the child his name and typically have a room ready for him, along with fantasies about who he will become. The child upon birth is immediately situated in a narrative created by the parents about the nature of the family and its relation to the external world. Wittgenstein correctly notes that to come to speak a language is to be immersed in a particular form of life. Different communities, societies and families have different forms of life. The newborn child who is thrown into the world has to try to learn the various criss-crossing language games of the family or society they are born into. Parents of a child with intellectual disabilities often talk about a process of mourning that they undergo when they realise that the child they had imagined will never exist. They need to mourn their idealised child in order to get to know their actual child, with his various impairments. Parents say that eventually they just see their child as a person and no longer see the intellectual disability. However, there is a period where the child is not living up to the unconscious fantasy that the parents had created of him. It is difficult to speak in general about the experiences of children with intellectual disabilities, as different disorders such as Williams Syndrome, Down Syndrome etc will result in different difficulties in processing emotions, communicating and so on. Furthermore, even people with particular syndromes will differ in their abilities and temperaments.
So there is no one-size-fits-all criterion we can provide which will describe the challenges they will face growing up within their family; these matters will have to be evaluated on a case-by-case basis. But while it is essential to take note of the child's intellectual disability and particular temperament, it is just as important to note the form of life that one is to some degree imposing on the child, and whether that form of life is suitable for the particular competencies of the child in question.

When the child is brought into respite care for the first time he will again be entering a prepared world, where the staff working there will have a working theory of who the child is. Some of the staff (typically the nurse in charge) will have met the family to discuss the child and will have been introduced to the child. The family will typically try to prepare the child for respite by telling him where he is going, showing him pictures of the place, and bringing him to visit the respite unit before his respite begins. The staff will prepare by reading files from psychologists, social workers and the GP, reports from the child’s school, and reports from the child’s parents/guardians.

Human nature being what it is, staff will have formed a picture of the child before he arrives. Evidence from the files and reports will help the staff form this picture, and upon hearing the nature of the disability and facts about the child's behaviour, staff may view the child in comparison with various other people with intellectual disabilities they have looked after. Preparations for the child's arrival will take place in various ways. One key issue will be which children the new child can suitably be placed with. Various social factors will have to be considered: if the child is prone to imitative behaviour, placing him with other children who can be violent or engage in self-injurious behaviour is not a good idea. Likewise, if the child finds noise intolerable, then placing him with children who like to make loud vocalisations may not be a good idea.

When the child arrives into respite care for the first time, just as when he is brought home from the hospital or on his first day at school, he is arriving at a drama prepared for him. The child is cared for by nurses and care staff with fixed roles; the staff will have an idea of who he is and how he will behave, and will have prepared the activities and environment accordingly.

Occupational therapists have long recognised that a huge part of what makes us who we are is what we do. Our daily activities are a large part of who we are. We are not disembodied Cartesian egos; on the contrary, we are embodied creatures immersed in and engaged with our world. Because of Heidegger's focus on 'Being-in-the-World' and his detailed phenomenological analysis of our behaviour as we engage with our lived daily experience, many occupational therapists and nurses incorporate aspects of Heidegger's phenomenology into their research[5]. The importance of understanding everyday activities, and how people find them meaningful in their daily lives, cannot be stressed too much when considering the child entering respite care.

As we have already noted, the child enters his new world with a place prepared for him by staff. Some children are verbal and some are non-verbal; either way, it is important for the staff and the new child to establish functional communication. If the child is non-verbal he will indicate what he wants by his behaviour: he will try to take what he wants, will get upset, and may engage in self-injurious behaviour if he cannot obtain it. Even children with severe intellectual disabilities can be taught to use at least some degree of symbolic communication. PECS (the Picture Exchange Communication System), in which the child is given a book of pictures attached to its pages with Velcro, is a useful system in this sense. Children who may not be able to use PECS to form actual sentences can still learn to use the pictures to communicate what they want. Lamh, a simple form of sign language which lacks the complex syntax of ordinary sign language, is also an effective way of communicating with others. However, if a child cannot master these devices to any degree, staff will need to interpret his behaviour; crying, for example, is a form of communication that something is wrong, and staff must use whatever background knowledge they have of the child to discover what he is trying to communicate.

The phenomenological approach I have sketched involves treating the child with intellectual disabilities as an intentional agent who is engaged in his world and is trying to find meaning in a social world partly created by those who care for him. This approach recommends that those whose responsibility it is to look after these children ensure that the social environment is structured in such a manner that it is as easy as possible for the child to immerse himself in the shared form of life of his peers.

There have been some regrettable disputes about the importance of phenomenological research in occupational therapy and nursing. Some proponents of its use have claimed that the phenomenological approach is an alternative to scientific third-person accounts. This has led to a backlash in which some theorists[6] have attacked phenomenological approaches to exploring the lived world of people who are ill or have a severe intellectual disability.

The approach of treating phenomenology as an alternative to scientific approaches is radically wrong-headed. When we are trying to understand the lived world of a person with intellectual disabilities we are adopting personal-level explanations of their behaviour. This is how we interact with people in ordinary lived experience. When I arrange to meet a friend for a coffee, I use this personal-level mode of explanation. Thus I will use propositional-attitude-style reasoning: I will assume that because my friend knows I will be in a particular coffee shop at a particular time, wants to meet me, has agreed to meet me at that time, and knows how to get to the shop, he will therefore meet me unless something happens to prevent him. We use this personal-level type of explanation all the time in our interactions with others, and it works quite well[7]. When dealing with children who have an intellectual disability we are dealing with persons, some of whom are perfectly healthy aside from whatever developmental delay they have. Personal-level interpretations are both appropriate and useful. To adopt a phenomenological approach is to interpret the behaviour of the child in terms of personal-level ascriptions, though careful phenomenological analysis will sometimes reveal a more detailed picture than our ordinary folk intuitions do. This is not an alternative to scientific analysis of behaviour. It is just an approach which is useful and humane.

If we are trying to teach a child better functional communication, behavioural techniques which operate at the sub-personal level are often superior to personal-level techniques. Likewise, if we are dealing with severe challenging behaviour, a combination of medical treatment and behavioural analysis usually works much better than any treatment at the personal level. But all of this is equally true of people who do not have intellectual disabilities. A person with no intellectual disability may suffer a bout of depression, and treatment with medication may work better than personal-level psychotherapy. Here we are using a sub-personal-level explanation to help the person. The same will be true if a person, with or without an intellectual disability, develops MS or any other disorder.

Personal-level ways of interpreting people with intellectual disabilities are not an alternative to scientific explanations; they are just the standard way of understanding how a person finds meaning in the world and in the particular socio-linguistic community they find themselves in. As always, we can shift our levels of explanation as pragmatically needed, and leave the question of whether propositional-level explanations will eventually be eliminated for future science to decide.

[1] Henceforth 'Philosophical Investigations' will be referred to as PI.

[2] Richard Rorty discusses the relation between the early Heidegger and the later Wittgenstein in his 'Wittgenstein, Heidegger, and the Reification of Language' (1991).

[3] Here my comments on care are restricted to practices in present day Ireland. Different countries have different procedures and practices and it would be a mistake to generalise my comments too widely without doing a lot of comparative research.

[4] Lacan, with whom I disagree a lot, is actually quite good at emphasising the nature of the family drama a child is thrown into at birth.

[5] See for example Wilding and Whiteford, 'Phenomenological Research: An Exploration of Conceptual, Theoretical, and Practical Issues' (2005); Clarke, 'An Introduction to Phenomenological Analysis: A Useful Approach for Occupational Therapy Research' (2009); and Crotty, 'Phenomenology and Nursing Research' (1996).

[6] For criticisms of the phenomenological approach in nursing see John Paley ‘Phenomenology as Rhetoric’.

[7] In his 1981 paper 'Eliminative Materialism and Propositional Attitudes', Paul Churchland argued that we may end up eliminating propositional-attitude-level explanations of behaviour as we learn more about neuroscience. He may be correct, but at present we are nowhere near that point, and there is little reason to deny ourselves the tool of personal-level ascriptions.