

CHOMSKY AND QUINE ON ANALYTICITY PART 2

ANALYTICITY NATURALISED?

                                    SECTION 1: TYPES OF ANALYTICITY

In Part 1 I discussed Quine’s arguments against the notion of analyticity. It was noted that Quine was attacking three notions: (1) analyticity as a defence of the a priori; (2) analyticity as a form of necessity; (3) analyticity as a purely semantic truth independent of collateral information. Chomsky’s claim that an analytic/synthetic distinction is a fact of natural language was then discussed. Throughout the discussion I treated Chomsky’s arguments as pertaining to semantic analyticity. By semantic analyticity I mean analytic truth as truth in virtue of the meaning of the terms involved, independent of collateral information. In this part I will consider whether Chomsky’s positive conception of analyticity can support a priori knowledge and our grasp of necessary truths.

As we have seen above, when Chomsky discusses language and mind he does so in a thoroughly naturalistic manner. Our language results from an innate module of the mind, which he attempts to explain in purely naturalistic terms. Chomsky tells an evolutionary story about the origins of language (see, for example, Hauser, Chomsky, and Fitch 2002). He argues that any analytic connections in natural language result from the internal structure of the innate language faculty. He also claims that human concepts have idiosyncratic features which were of no particular use to our ancestors, so it is unlikely that they were formed through natural selection. He assumes that if our innate conceptual abilities were not formed through natural selection then they must have resulted from a random mutation. Obviously, conceptual abilities which resulted from a random mutation will in no way provide us with any kind of a priori justification.

When discussing the rules which govern our language, Chomsky has been quite explicit that we are not justified in applying the rules of natural language. He argues that we just automatically implement them, in the way a computer implements a programme:

In short, if I follow R, I do so without reasons. I am just so constituted. So far, these conclusions offer no challenge to the account discussed earlier. I follow R because S` maps the data presented into S*, which incorporates R; then I apply R blindly. There is no answer to the Wittgenstein Sceptic and there need be none. My knowledge, in this instance, is ungrounded….I have no grounds for my knowledge in any useful sense of the term and no reasons for following the rules: I just do it. If I had been differently constituted with a different structure of my brain, I would have followed a different rule. (1986b, 225)

 

In the above passage, Chomsky is responding to Kripke’s rule-following paradox. He is claiming that he does not have a solution to the paradox: he cannot say why calculating according to the plus function is justified whereas calculating according to the quus function is not. His answer is to point to the fact that we do just calculate one way as opposed to the other, and to explain this fact by invoking innate constraints on the type of mathematical rules we are capable of following. The innate structure of our brain means that we interpret ‘+’ in terms of ‘plus’; however, if our brain were structured differently, we might well interpret ‘+’ in terms of ‘quus’. I will not discuss Chomsky’s response to the rule-following paradox here; my main aim is to show that his answer to Kripke reveals the folly of reading his conception of analyticity as an epistemological conception.
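To make the plus/quus contrast concrete, here is a minimal sketch in Python. It is purely illustrative and not anything Chomsky or Kripke provide; the threshold of 57 is simply the cut-off Kripke uses in his own exposition, and nothing in the argument depends on that particular number.

```python
# Illustrative sketch of Kripke's plus/quus example (the threshold of 57
# is taken from Kripke's own presentation; the number itself is arbitrary).

def plus(x: int, y: int) -> int:
    """Ordinary addition: the function we take ourselves to mean by '+'."""
    return x + y

def quus(x: int, y: int) -> int:
    """Kripke's deviant function: agrees with plus on all the 'small'
    cases a speaker has previously computed, but diverges beyond them."""
    if x < 57 and y < 57:
        return x + y
    return 5

# The two functions agree on every case actually computed so far...
assert plus(2, 3) == quus(2, 3) == 5
# ...yet they come apart on new cases, which is the sceptic's point.
assert plus(68, 57) == 125
assert quus(68, 57) == 5
```

The sceptic’s challenge is that nothing in a speaker’s past usage settles which of these two functions ‘+’ has been tracking; Chomsky’s reply, as quoted above, is simply that our innate constitution fixes one interpretation rather than the other, without justifying it.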

Chomsky has explicitly claimed that people are born with innate concepts, some of which will, as a matter of empirical fact, yield analytic connections. He has furthermore claimed in a number of places that concepts of natural language do not refer to entities in the mind-independent world in the manner in which people like Quine, Frege, and Tarski believe they do. This being so, it is clear that on the Chomskian picture ‘bachelor’ and ‘unmarried man’ do not mean the same thing because they pick out the same mind-independent entity; rather, they mean the same thing because they both express the same concept.

This view of course puts Chomsky in a very weak position. It means that we cannot justify our sentences as being true by virtue of the meaning of the terms involved. Instead we just automatically categorise them so, and if our brain were constituted differently we would categorise them differently. A number of points follow immediately from this. The types of analytic connections which Chomsky defends have no epistemological significance. The analytic connections he is concerned with are not connections which justify us in holding certain beliefs true. So it could be argued that the analytic sentences which Chomsky is concerned with are of a philosophically uninteresting type.

However, such an interpretation is complicated by the following quotation from Chomsky:

These are truths of meaning, not of fact. The a priori framework of human thought, within which language is acquired, provides necessary connections among concepts, reflected in connections of meaning among words and, more broadly, among expressions involving these words… (2000b, 63)

 

When Chomsky talks about an a priori framework of human thought and about necessary connections amongst concepts, he seems to be making claims which his explicit theory would not allow. No contingent fact about our brains would justify us in holding a belief true, and it is likewise highly unlikely that a contingent feature of our brain structure would somehow put us in touch with a necessary truth about the universe. So it seems that Chomsky’s naturalistic claim that humans are born with concepts which yield analytic connections is inconsistent with his claim that these analytic truths provide a priori justification or knowledge of metaphysical necessity. Throughout the remainder of this section I will endeavour to reconcile these seemingly inconsistent views.

Given that Chomsky has claimed elsewhere that our syntactic rules are not justified but are merely followed blindly, it is obvious that we should not interpret his claims about an a priori framework of human thought epistemically. Rather, we should interpret an a priori framework of thought to mean an innate framework of thought. But this of course leads to the question of what Chomsky means by necessary connections amongst concepts. Consider the concepts 2, 4, +, and =: these can be combined to create the necessary truth 2+2=4. Now philosophers such as Hume and Ayer have claimed that the truths of arithmetic are analytic truths. Given Chomsky’s various discussions of analyticity, and his views on concepts expounded in his reply to Kripke, it seems that Chomsky also considers the truths of arithmetic analytic, in the sense of being true in virtue of the meaning of the terms involved. But it seems that this type of analyticity is not strong enough to support metaphysical necessity.

In order for our analytic truths to yield metaphysical necessity, there would have to be some sort of correspondence between the truths of meaning which are encoded in the brain and necessary truths which exist in an unchanging mind-independent world. If the connection were a merely contingent one, then it would be just the result of time and chance. It is sometimes helpful to grasp the distinction between contingent and necessary truths using the framework of possible worlds. According to Chomsky, it is a fact about the human brain that some of the concepts we are born with will automatically yield analytic connections. Surely, though, this being so, it is also possible that the human brain could have been constructed differently. It is obviously possible that humans could have been born with innate concepts but no analytic connections between their concepts: call such creatures Fodorsapiens. On the analogy of males and females, who are both members of the species Homo sapiens, it is also possible that there were two different types of Homo sapiens, distinct from each other but able to reproduce with each other: call them Chomskysapiens and Fodorsapiens. Both strands of Homo sapiens are descended from the same ape ancestors and diverged from each other only because of a random mutation which did not affect their ability to reproduce with each other. We can imagine in our story that the Fodorsapiens eventually died out because their slightly different DNA made them unable to fight off the common cold. The point is that there are many possible worlds which can be constructed in which it was the Chomskysapiens who died out, or in which a random mutation changed which sentences the Chomskysapiens find analytic. The truths of meaning which Chomsky claims are a component of the mind/brain seem to be nothing more than contingent results of our evolutionary history, and his claim that they provide necessary connections remains unargued.

So it would seem that, since the brain is the contingent product of its evolutionary history, any concepts it is innately born with will correspond with a mind-independent reality, if at all, only coincidentally. Here, of course, the type of analytic truth in question is of paramount importance. If we are to count the truths of arithmetic as analytic, then we can sketch a story of how an innate number sense would be vital to survival. We could cite how a creature who could not calculate whether the number of foes was equal to the number of friends would not survive very long. We could point to creatures such as monkeys and dogs, which display a grasp of number, to show that this sense is probably a universal feature of any creature that wants to survive in a hostile environment.

When one tries to explain other analytic truths such as ‘All bachelors are unmarried men’, or ‘All dogs are animals’, sketching a story in terms of survival value seems more difficult. The idea of ‘All bachelors are unmarried men’ being a necessary truth, or a truth which is justified independently of the facts, seems unintelligible. However, the idea that the sentence is analytic because ‘bachelor’ and ‘unmarried man’ correspond to the same mentalese concept is obviously intelligible. So the question we are faced with is the following: since analytic connections are not limited to the language faculty, what is their place in the overall taxonomy of the mind which Chomsky has constructed so far?

In a personal communication Chomsky tried to clarify some of these questions for me. He claimed that:

In this context, “a priori” means what Konrad Lorenz called “biological a priori,” that is, part of the cognitive equipment with which an organism organizes and approaches external data, mapping it into experience, and then interpreting it.  The issue of justification doesn’t arise, any more than it does in the case of “book” rhyming with “took” (in my speech).  This doesn’t conflict with Boghossian. He is talking about something different. (Chomsky: personal communication).

 

So Chomsky here agrees with my analysis of his views on analyticity. His views offer nothing towards helping us clarify such notions as a priori knowledge and necessary truths.

Even if Chomsky is correct that people are born with an innate system of concepts which yield analytic connections, this fact will still leave him and Quine in agreement on some central points. Quine was criticising how philosophers had used analyticity as a philosophically neutral explanation of a priori knowledge and necessary connections. He wanted to show that the only type of analyticity he could make sense of, stimulus analyticity, could not explain the philosophically meatier conceptions of the a priori and necessity. Chomsky’s alternative conception of analyticity, like Quine’s, does not purport to explain our grasp of necessary truths, nor any supposed a priori knowledge. So both thinkers are in agreement on this point at least. However, as we have already seen, Quine was not only arguing against analyticity in the sense of an explication of the a priori and necessity. He was also arguing against the semantical notion of analyticity. So one clear area of disagreement between the two thinkers centres on whether there are analytic connections in the sense of sentences whose truth is fixed purely in terms of meaning, independently of extra-linguistic fact. Quine gave arguments against this in the first four sections of ‘‘Two Dogmas’’, and these are arguments which Chomsky never managed to answer. Furthermore, Chomsky has offered little evidence in support of his positive conception of analyticity. So it is clear that Chomsky has not really answered Quine’s objections against the idea of an analytic/synthetic distinction.

Since Chomsky’s conception of innate analytic connections in natural language and Quine’s conception of stimulus analytic connections rest on such radically different pictures of how language is learned, it would be foolish to say that they are in agreement on the nature of analyticity. However, it is significant that both thinkers are in general agreement that analytic sentences do not explain a priori knowledge or necessary truths. So the debate between them on analytic sentences is an illusory one when it comes to metaphysics and epistemology. There is, however, a substantive debate between the two thinkers as to the nature of analytic sentences as a part of natural language, and I will return to this point later in the essay.

 


                     SECTION 2: FACULTY PSYCHOLOGY AND ANALYTICITY

 

Chomsky has at various times spoken about the language faculty, a science-forming faculty, a folk physics faculty, and a moral faculty. To say that we have a faculty is to say that we have domain-specific knowledge of something, and that this knowledge has not resulted from a general inductive procedure. A clear indication of a faculty operating in the brain is when, as a result of disease or accident, a person loses a certain ability (e.g. linguistic ability) while their general intelligence remains unimpaired. So, for example, people such as Marc Hauser have argued for the existence of a moral faculty based on the fact that selective brain damage results in people losing their moral sense but retaining their overall general intelligence. Likewise, Chomsky has discussed cases of people losing their linguistic abilities without any effect on their overall general intelligence.

Another key reason to believe that some abilities are the result of a faculty is that the ability arises at set times in a person’s development. Furthermore, the ability arises in different people at more or less the same time, independently of their general intelligence. Again, Hauser argues for the existence of a moral faculty based on the fact that children at the same stage of development uniformly answer moral tests in a similar way.

There is strong evidence that children come equipped to the world with an intuitive folk physics and folk psychology. The fact that these two domains of knowledge are not commensurate indicates that they may belong to different faculties of the mind. And there is independent evidence that folk physics and folk psychology are faculties of the brain. A child’s folk psychology develops at set times, independently of general intelligence. So, for example, by the age of four children can typically pass the false-belief test, with the notable exception of autistic children, most of whom fail the test at this age. The fact that autistic children fail this test is instructive because they typically show no deficit in general intelligence. Furthermore, children who do in fact have a lower IQ, such as children with Down syndrome, manage to pass the test by the age of four. This suggests that we may be justified in claiming that folk psychology is a distinct faculty of the human brain.

Overall, I would claim that there is some reasonable evidence (though far from overwhelming) to support the postulation of faculties for folk physics, folk psychology, morality, and language, amongst others. When Chomsky claims that humans are born with innate concepts which will automatically yield analytic connections, he fails to detail the nature of these concepts, or whether these concepts are unique to particular faculties. Consider the following purported analytic statements: (1) All ducks are animals. (2) If John persuaded Bill to go to school, then Bill decided/intended to go to school and did so without being under duress. (3) Three times five is half of thirty. It is a fair bet that most people would consider these statements analytic, though it must be admitted that some people may answer (3) by saying it is synthetic a priori. Let us assume for a minute that most people would consider all three sentences analytic. The question is: for a Chomskian, from where does their analyticity derive?

The subject matter of (1) is biology, of (2) folk psychology, and of (3) mathematics. All three statements are, to use the traditional definition, true by virtue of the meaning of the terms and not because of extra-linguistic facts. For Chomsky, as we have seen, the meaning which creates these analytic connections is something we are born with, not something which is true by convention. The three statements contain concepts of biological kinds, agency, and number. What we do not know is whether those concepts grow in the brain separately from the different faculties, or whether they grow internal to those faculties. So, for example, if a mathematical truth such as 2+2=4 grows internal to a faculty of mathematics, then damage to this faculty would have no significance for our ability to understand analytic truths which grew internal to other faculties. This is a question on which Chomsky has not in general been very clear. He correctly notes that more empirical research is required before we can decide what the correct answer is, and that it is empirical research that is necessary, not a priori philosophical speculation.

Whatever the exact structure of the various faculties may be, Chomsky thinks that if there are analytic connections in different domains they will emerge internal to the faculties. He is explicit on this point in personal correspondence with me:

 

Given my language faculty, it’s a necessary truth that “book” rhymes with “took” and that “chase” entails “follow.” Horwich is quite right.  This has no bearing on, say, whether arithmetic truths are synthetic a priori… On the Hume questions, we have to distinguish what our innate sense of number determines, and what is the truth about numbers (if we regard them as entities that exist independently of our cognitive capacities, as Platonic entities of some sort).  The two presumably coincide, but we can imagine an organism in which they would not (and maybe that’s us, as a matter of fact). (Chomsky: personal communication)

 

One important point to note about this communication is that Chomsky here admits a kind of scepticism about the truths of mathematics. He claims that while it is possible that our innate number sense does correspond to mind-independent Platonic entities, we cannot say for certain that it does. Now, given that our best scientific theories about the world are couched in mathematical formulae, this is a startling admission. Chomsky is claiming that we have no justification for acting in accord with our innate number sense, and that we have no way of knowing whether our innate number sense corresponds to any facts in the objective world.

Furthermore, Chomsky is arguing that different types of analytic connections will emerge internal to our various faculties. So we will have analytic truths derived internal to our innate number sense, analytic truths derived internal to our language faculty, and so on. This means that Chomsky’s view on the nature of analyticity is parasitic on his views on the various faculties of the mind. It also shows that truths which are typically grouped together by philosophers as analytic derive their analyticity internal to particular faculties. So Chomsky’s justification for arguing that analytic truths exist depends on accepting his controversial views on (1) innate concepts, (2) faculty psychology, and (3) a potentially radical scepticism about the truths of mathematics. Chomsky’s conception of analyticity seems to raise more questions than it answers.

                   SECTION 3: CHOMSKY AND QUINE ON ANALYTICITY

In the previous sections we established that neither Chomsky nor Quine has a conception of analyticity which is rich enough to provide a priori justification and evidence of necessity. However, I also noted that, despite having this much in common, their views on the nature of analyticity diverge radically. In this section, I compare the two views and consider which deals better with the available data.

Having already explained Quine’s negative critique of analyticity at the beginning of this chapter, I will not repeat the material here. In this section I will focus on how Quine explains away people’s intuitions of analyticity.

As is well known, Quine criticised the notion of analyticity as it was used by philosophers from Descartes to Ayer. Quine, of course, did not rest with such negative criticisms; he also explained why people held such intuitions of analyticity in the first place. Quine was well aware of experimental work which showed that people had an intuitive conception of analyticity. However, for Quine, such intuitions do not provide evidence that analytic connections actually exist in natural language. He notes that analytic intuitions typically set in where people have difficulty figuring out what the person who denies the truth of such a sentence could mean by its denial. This reaction is perfectly consistent with Quine’s web of belief story. If a person denied something which is deeply embedded in our web of belief, an interpreter would indeed have difficulty figuring out what the person meant by the denial.

We saw above that Putnam criticised Quine’s blanket criticism of the notion of analyticity. Putnam correctly argued that the web of belief story does not account for trivial cases of analyticity. In Word and Object Quine told a story about analyticity which, I will argue, is largely consistent with the story told by Putnam.

Quine noted that terms like ‘‘Indian nickel’’ and ‘‘Buffalo nickel’’ are not stimulus synonymous while ‘‘bachelor’’ and ‘‘unmarried man’’ are. Quine explains this disparity by noting how the different terms are learned. He argues that terms like ‘‘Indian nickel’’ and ‘‘Buffalo nickel’’ are learned by association with non-verbal stimuli, whereas terms like ‘‘bachelor’’ and ‘‘unmarried man’’ are learned by association with verbal stimuli. So, for example, one learns the meaning of ‘‘bachelor’’ by learning to associate it with ‘‘unmarried man’’. Obviously, from a social point of view the stimulus meaning of the term ‘‘bachelor’’ will vary from person to person. A child can be told the meaning of the word ‘‘bachelor’’ by ostensive definition; however, obviously, the particular person who is being pointed to will not be representative of the class of people in the world who are bachelors. So a child who learned the meaning of ‘‘bachelor’’ this way would be unable to generalise the term using this technique. Even if one pointed to a vast number of different bachelors and named them, the child would not be able to generalise the term, because there are no particular characteristics shared by bachelors alone other than being unmarried men. If an adult wants to teach a child what a bachelor is, he can do no better than tell the child that bachelors are unmarried men. In the case of the terms ‘‘Indian nickel’’ and ‘‘Buffalo nickel’’ things are different: we learn these terms through ostension, and if we learn that ‘‘Indian nickel’’ and ‘‘Buffalo nickel’’ pick out the same object, we do so inductively. Quine argues that some technical terms of science likewise do not have a socially fixed stimulus meaning to govern how they are used; in this sense they are like terms such as ‘‘bachelor’’ and ‘‘unmarried man’’. There is, however, one difference between the technical terms of science and terms like ‘‘bachelor’’: ‘‘bachelor’’ gets its meaning by being connected to ‘‘unmarried man’’, while the meaning of a technical term is governed by its relation to a whole scientific theory.

Quine’s discussion of analyticity above is largely consistent with the story told by Putnam. Quine argues that a sentence like ‘‘All bachelors are unmarried men’’ appears to be analytic because we learn the meaning of ‘‘bachelor’’ through associating it with ‘‘unmarried man’’. It is not actually analytic in the sense of being immune to revision, nor does it provide us with any kind of a priori justification. The sentence merely has the appearance of being analytic because of the way it is learned. As language evolves, the sentence ‘All bachelors are unmarried men’ may no longer be considered analytic, though in terms of present usage it certainly gives the appearance of being so. Likewise, Quine’s web of belief story explains our grasp of less trivial sentences which have the appearance of being analytic, though, as in the ‘bachelor’ case, he again accepts that such sentences are subject to revision as our language and theories evolve.

Like Fodor, and unlike Putnam, Quine does not beg any questions, because he is not trying to explain the existence of analytic truths by using one-criterion concepts. Quine is merely arguing that the fact that some concepts are learned through a single criterion explains why some people have analyticity intuitions. Ultimately, Quine argues that there is no sensible way of explicating an analytic/synthetic distinction. For this reason he argues that we have no reason to believe that such a distinction exists in language. He acknowledges that people have an intuition of analyticity, but he explains away this intuition through the notion of one-criterion concepts.

Overall, then, Chomsky and Quine are agreed on certain points. Neither thinker argues that any sense can be made of the notion of analyticity as a defence of a priori justification, or of our supposed grasp of necessary truths. Both thinkers, however, do manage to offer at least some explanation of why people have an intuition that a distinction between analytic and synthetic truths exists. Chomsky’s explanation of analyticity is sketchy and vague and has little evidential support. He claims that people are born with innate concepts which as a matter of fact yield analytic connections; however, he does not offer empirical evidence to support this claim. Furthermore, his claims about analyticity commit him to the view that analytic connections exist in different forms in different faculties. So his purported explanation of analyticity relies on prior commitments to faculty psychology, and while this commitment may turn out to be justified, it is very much an open question. More importantly, his views on the analyticity of mathematics yield a type of scepticism which cannot be taken seriously. Quine’s arguments against the notion of an analytic/synthetic distinction were not refuted by Chomsky. Quine’s own explanation of why people have an intuition of analyticity relies on a conception of language acquisition which he has not shown to be correct. Whether Quine’s account of people’s intuitions of analyticity is correct is thus parasitic on whether his conception of language is correct, and at present the evidence does not settle the issue. What is clear is that Chomsky’s positive arguments for the existence of an analytic/synthetic distinction have little evidence in support of them.

 

CHOMSKY AND QUINE ON ANALYTICITY PART 1

    CHOMSKY AND QUINE: THE ANALYTIC AND THE SYNTHETIC: PART 1

                                SECTION 1: THE OBJECT OF ATTACK

Despite the fact that Chomsky and Quine are both methodological naturalists, they nonetheless disagree on a number of issues. In this blog I will consider one such disagreement: their differing views on the status of the analytic/synthetic distinction. I will show that this dispute between them is an entirely naturalistic one. Traditionally, philosophers who used analyticity in their theorising did so in the service of a non-naturalistic philosophy. Such thinkers used analyticity as an explanation of non-naturalistic notions such as a priori knowledge and our grasp of metaphysical necessity. One of my main reasons for discussing the dispute between Chomsky and Quine on analyticity is to illustrate a disagreement between the two thinkers which, despite superficial appearances, is entirely naturalistic. I will evaluate the evidence put forth by both thinkers and show that Chomsky has not offered sufficient evidence to justify his claim that analytic connections exist in natural language.

Quine’s ‘‘Two Dogmas of Empiricism’’, in which he attacks the analytic/synthetic distinction, is probably one of the most famous papers in the history of analytic philosophy. However, while the paper has been extremely influential, it has long been noted that the object of Quine’s attack was unclear, even to Quine himself. In ‘‘Two Dogmas’’ Quine seemed to be attacking three different targets: (1) truth in virtue of meaning; (2) necessary truth; and (3) a priori knowledge. This conflation was understandable because, at the time Quine wrote ‘‘Two Dogmas’’, most philosophers believed that an analytic truth was a truth which was both a priori and necessary. Logical Positivists had argued that the only sensible way to account for supposedly a priori or necessary truths without postulating some mysterious faculty of intuition was to do so in terms of analyticity. So it is understandable that Quine would have believed that by attacking analyticity he was also attacking the a priori and necessity.

Post Saul Kripke, few philosophers would conflate analyticity, necessity, and the a priori. It is worth bearing the above distinctions in mind when evaluating Quine’s arguments, because doing so will help us key in on what his arguments are directed towards and on whether they are effective. The a priori/a posteriori distinction is an epistemological distinction. The necessary/contingent distinction is a metaphysical distinction. The analytic/synthetic distinction is a distinction between purely semantic truths and truths which rely on both semantic facts and extra-linguistic information. In the 1950s, Quine, like most philosophers, believed that these distinctions lined up. He furthermore believed that, since analyticity was the only way of explicating the a priori and necessity, by attacking analyticity he was attacking the other distinctions as well. Quine was not merely concerned with attacking analyticity in so far as it was used as an explanation of the a priori and necessity; he was also concerned with attacking the very notion of a purely semantic truth. He believed that truth by virtue of the meaning of the terms was a senseless notion. I will now discuss his actual arguments against analyticity in ‘‘Two Dogmas’’, focusing first on his arguments against the semantical notion of analyticity.

 

               SECTION 2: TWO DOGMAS OF EMPIRICISM: A SUMMARY

When Logical Positivists spoke of analytic truths they included truths which we would group as necessary truths, a priori truths, and purely semantic truths. Under the banner of analytic truths one would find philosophically important notions such as truths of logic, truths of geometry and arithmetic, as well as trivial semantic truths. So, for example, one would find complex mathematical statements grouped under the same category as a trivial statement such as ‘All bachelors are unmarried men’. Quine, who was working within this logical positivist tradition, grouped trivial analytic statements together with more philosophically important analytic truths.

In the first four sections of ‘‘Two Dogmas’’ Quine was critiquing the very notion of an analytic truth. Quine argued that we could make sense of what he called Logical Analyticity, an example of which is (1) ‘All bachelors are bachelors’. However, he stated that he could make no sense of analytic truths which result from substituting synonyms for synonyms, for example (2) ‘All bachelors are unmarried men’. We will call analytic truths derived by substituting synonyms Meaning Analyticity. Quine argued against Meaning Analyticity because he could find no cogent way of explicating what it amounted to. Any attempt to define meaning analyticity relied on notions which could themselves only be made sense of if we helped ourselves to the prior notion of meaning analyticity. But since analyticity is the very notion we are trying to define, if the other terms we use to define it themselves rely on the notion of analyticity, then our explanation will be circular. Quine’s objection to the notion of analyticity is usually called his ‘circle of terms’ argument: we cannot define analyticity without terms such as ‘‘synonymy’’, ‘‘necessity’’, and ‘‘meaning’’, and these terms can only themselves be defined in terms of the ill-understood notion of analyticity. So Quine argued that, since we cannot define analyticity in a non-circular manner and the notion makes no sense behaviourally, we have no justification for claiming that analytic sentences, as traditionally conceived, exist.
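The contrast between the two classes can be sketched schematically. The first-order notation and the predicate letters below are mine, introduced purely for illustration; Quine himself puts the point informally.

```latex
% (1) Logical Analyticity: true under every reinterpretation of the
%     non-logical vocabulary (B = "is a bachelor").
\forall x \, (Bx \rightarrow Bx)

% (2) Meaning Analyticity: not itself a logical truth, but turned into
%     one by substituting the synonym U ("is an unmarried man") for B.
\forall x \, (Bx \rightarrow Ux)
\quad \longrightarrow \quad
\forall x \, (Ux \rightarrow Ux)
```

Quine’s complaint is directed at the substitution step: it presupposes the notion of synonymy, which, he argues, cannot be explicated without circling back to analyticity itself.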

The primary aim of the first four sections was to criticise the notion of meaning analyticity; in these sections Quine was concerned with attacking the notion of purely semantic truth. In the final two sections Quine attacks what he takes to be the dogma of reductionism. He begins by attacking the verification theory of meaning. He notes that, according to the verification theory of meaning, the meaning of a statement is the method of confirming or infirming it (1951, 37). He then notes that an analytic statement is that limiting case which is confirmed no matter what (ibid., 37). Quine claims that it is possible to argue for a conception of analyticity using the verification theory of meaning: one could say that two statements are synonymous if they have the same conditions of verification, and define analyticity in terms of such synonymy.

Quine rejects the definition of analyticity in terms of conditions of verification because of deficiencies which he sees in the verification theory of meaning. He famously argued that the unit of empirical significance is the whole of science (ibid., 42). He pointed out that all empiricist attempts at reduction had failed. Furthermore, he noted that such failures were failures in principle; all such attempts relied on undefined terms to construct their reduction. It was because of this failure that Quine argued for his famous web of belief. When cashing out the nature of the web of belief, Quine claimed that ‘our statements about the external world face the tribunal of sense experience not individually but only as a corporate body’ (ibid., 46).

An important consequence of this holism is its effect on the falsification of our scientific theories. Since our total theory of the world is an interconnected web of belief, and this web faces the tribunal of experience as a corporate body, the falsification of a prediction made by the theory will reverberate throughout the theory. Quine notes that if we are faced with a supposed falsification of a prediction, how we modify the theory which made the false prediction is dictated by pragmatic considerations. So, for example, since mathematical and logical principles are so deeply connected to every area of our scientific theories, they will be the last thing that we modify if our theory makes a false prediction. Nonetheless, there is no reason in principle why we cannot revise mathematical or logical laws if doing so seems the best way to accommodate recalcitrant experience.

Laws of mathematics have traditionally been viewed as special in that they could not be explained empirically[1]. Kant claimed that mathematical laws were a priori synthetic truths: while they were a priori true, they were not analytic truths. An analytic truth was, for Kant, a statement in which there was nothing in the predicate concept that was not already contained in the subject concept. However, Kant held that mathematical truths could not be explicated in terms of analyticity. So, for example, if we consider ‘2+2=?’, according to Kant no analysis of the constituent concepts will give us the answer ‘4’. Kant claimed that if we want to discover an answer in mathematics we have to construct it in intuition; merely analysing concepts will get us nowhere.

Logical Positivists argued, contrary to Kant, that mathematical truths could be reduced to logical truths. They recognised two types of truth, analytic truth and synthetic truth. Mathematical truths, like truths of logic, were viewed by them as analytic truths.

Quine’s web of belief argument, with its emphasis on the fact that mathematical and logical truths are revisable in principle but in practice are rarely ever revised, is pertinent to the considerations of Kant and the Logical Positivists. To explain the special status of mathematics, Kant had to postulate a mysterious kind of knowledge (a priori synthetic knowledge). Logical Positivists tried to explain mathematics in terms of logic, which was in turn explained by linguistic convention. Quine demonstrated that the doctrine of truth by convention was dubious because, in order to consistently apply any logic, we would need to presuppose a logic prior to such an application (1934). While Kant’s postulation of a priori synthetic knowledge was never made sense of in a non-mysterious way, Quine’s web of belief picture explained why mathematical and logical truths seem to have a special status in our overall theory of the world. Furthermore, it managed to avoid the difficulties which plagued the pictures of the Positivists and Kant.

Quine’s web of belief argument shows that we do not need to appeal to analytic truths, in the sense of truths in virtue of meaning, to explain the apparent specialness of mathematics and logic. It furthermore shows that we do not need to appeal to a priori synthetic knowledge to explain what is apparently special about mathematics. Quine’s picture of mathematics and logic as deeply embedded in our total theory of the world explains their apparent specialness adequately. Furthermore, by admitting that basic laws of mathematics and logic can be revised, he makes sense of the history of science in a way that people like Kant do not.

However, as Putnam pointed out in his paper ‘‘The Analytic and the Synthetic’’, while Quine’s argument for the web of belief is persuasive as an explanation of non-trivial analytic sentences, it is far from persuasive when it comes to trivial analytic sentences. While it is sensible to explain away truths of mathematics, conceived as analytic truths, by appeal to their being deeply embedded in our web of belief, it is not sensible to claim that a trivial statement such as ‘All bachelors are unmarried men’ is deeply embedded in our web of belief. I will explore this weakness in Quine’s explanation of analyticity later, in Section 4. First, I will explicate Chomsky’s difficulty with Quine’s claim that there is no sensible distinction to be drawn between analytic and synthetic sentences.

SECTION 3: CHOMSKY’S DEFENCE OF THE DISTINCTION

It appears, then, that one of the central conclusions of modern philosophy is rather dubious: namely, the contention- often held to have been established by the work of Quine and others- that one can make no principled distinction between questions of fact and questions of meaning, that it is a matter of more or less deeply held belief. (2000b, 63)

 

As a linguist, Chomsky was primarily concerned with whether an analytic/synthetic distinction was, as a matter of empirical fact, a distinction that ordinary speakers of natural language recognised. He argued, contra Quine, that analytic connections do exist in natural language, and that it is the job of the cognitive scientist to study such connections. As in the case of syntax, we begin with people’s intuitions; in the case of analytic connections, we need to test whether people can reliably distinguish sentences which are analytic from sentences which are synthetic. So the question arises: would an ordinary speaker of English be able to distinguish an analytic sentence like ‘All bachelors are unmarried men’ from a synthetic sentence like ‘All bachelors like Seinfeld’? For Chomsky’s claim that analytic connections are a fact of natural language to be shown to be correct, experiments need to be conducted. However, experimental research on the analytic/synthetic distinction has not exactly been abundant. There have been some experiments on the distinction, for example Apostel, Mays, Morf, and Piaget’s Les liaisons analytiques et synthétiques dans les comportements du sujet, Arne Naess’s Interpretation and Preciseness, as well as Katz and Fodor’s paper ‘‘The Structure of a Semantic Theory’’. These studies do suggest that people find statements intuitively synonymous to varying degrees, and this seems to support Chomsky’s claim that people have intuitive conceptions of which sentences in natural language are analytic and which are synthetic.

Chomsky argues that, since the experiments show that people intuitively distinguish between analytic and synthetic sentences, we need to explain this fact. His explanation centres on poverty of stimulus considerations. He argues from the speed at which children acquire words (about 12 words a day at peak periods), and the incredible complexity of the words acquired, to the claim that children must be merely labelling concepts they are born with. From his argument for innate concepts, he draws the following conclusion:

This would appear to indicate that the concepts are already available, with much or all of their intricacy and structure predetermined, and that the child’s task is to assign labels to concepts, as might be done with limited evidence given sufficiently rich innate structure. And that these conceptual structures appear to yield semantic connections of a kind that will, in particular, induce an analytic-synthetic distinction, as a matter of empirical fact. (2000b, 62)

 

On the face of it, this argument simply does not work. Chomsky’s poverty of stimulus argument for concepts relies on the assumption that when children acquire words this ability indicates a grasp of the complex concept which the word expresses. So, for example, it implies that when a child acquires a word such as ‘Mama’, the child understands the same complex concept which adults do when they explicate what the concept MAMA means. Chomsky has never provided any evidence to support this wildly implausible claim. If we take away the unproven claim that the words children learn at peak periods of language acquisition express full-blown adult concepts, then the acquisition of 12 words a day is less impressive than Chomsky claims. If a child using a word such as ‘mama’ had only a simple understanding of what it meant, then the fact that the child was learning such words so fast would not seem so miraculous, and there would therefore be no need to postulate innate concepts to explain the speed of word acquisition.

While there is some evidence that children from as young as four months of age demonstrate some grasp of concepts such as OBJECT, CAUSATION, and AGENT, this evidence does not support Chomsky’s more radical claims about concepts. I will discuss the evidence for children’s conceptual abilities in chapter three; here I will merely note that it is at best suggestive. The evidence does show that children have some concepts prior to learning language. However, it does not conclusively demonstrate that these concepts are innate, nor that children possess most of their concepts prior to learning language. Chomsky is making much more radical claims than that some abstract concepts such as OBJECT or CAUSATION are innate. In a personal communication with me he made the following claims:

Also, there is good evidence that innateness of concepts goes far beyond the philosophically interesting examples that you mention.  It’s hard to imagine how else people could acquire the meanings of the simplest words in the language — “river,” “tree,” “person,”….  They have rich properties for which there is no empirical evidence for the child. (Chomsky: Personal Communication)

 

Furthermore, in his New Horizons in the Study of Language and Mind, Chomsky made even more radical claims about the innate conceptual abilities of children. Chomsky considered an argument from Putnam (1988) that evolution could not have equipped us with an innate set of concepts including ones like CARBURETTOR and BUREAUCRAT. Putnam argued that evolution could not have equipped us with these concepts because, in order to do so, evolution would need to be able to anticipate all future contingencies. Chomsky replied by noting that Putnam’s argument is incorrect because we do not need to assume that evolution anticipated all possible contingencies, just the particular ones in question.

Chomsky claimed that an argument similar to Putnam’s had long been accepted in immunology, and that work by Niels Kaj Jerne challenges this idea:

Notice that a very similar argument had long been accepted in immunology: namely, the number of antigens is so immense, including even artificially synthesized substances that had never existed in the world, that it was considered absurd to suppose that evolution had provided an ‘innate stock of antibodies’; rather, formation of antibodies must be a kind of ‘learning process’. But this assumption might well be false. Niels Kaj Jerne won the Nobel Prize for his work challenging this idea… (ibid, 65)

 

This analogy proves nothing: it is not in doubt that it is possible that people are born with an innate stock of concepts like CARBURETTOR and TREE. What is in doubt is whether there is any evidence that supports the claim that such concepts are innate. Chomsky has provided none whatsoever here; he merely gestures vaguely in the direction of poverty of stimulus considerations to support his conjecture about innate concepts:

Furthermore, there is good reason to suppose that the argument is at least in substantial measure correct even for such words as Carburettor and Bureaucrat, which, in fact, pose the familiar problem of poverty of stimulus if we attend carefully to the enormous gap between what we know and the evidence on the basis of which it is known. (2000b, 65)

 

Here Chomsky is using a poverty of stimulus argument similar to the one he uses in the area of syntax, which I will discuss in chapters two and four. The poverty of stimulus argument in syntax has the merit of being precise and testable; Chomsky has provided no comparably explicit model of a poverty of stimulus argument for innate concepts. So we have no reason to believe his radical claims about innate concepts until more evidence is provided.

It is of course possible that Chomsky is correct that children are born with innate concepts such as CARBURETTOR, TREE, BUREAUCRAT, RIVER, etc.; however, a great deal of evidence is needed to support such an extraordinary claim. Chomsky vaguely points to poverty of stimulus considerations, but he provides no details of where the supposed gap lies between the knowledge children display of the concept CARBURETTOR and the data they receive when learning the concept. So until he spells out his poverty of stimulus argument in more detail, his claims on these matters should be viewed with extreme scepticism.

More importantly, even if Chomsky did provide evidence that such concepts are innate, it would not follow that they will automatically yield analytic connections. Jerry Fodor has long agreed with Chomsky that most of our concepts are innate; however, he does not think that analytic connections exist in natural language. It is worth briefly considering Fodor’s views on innateness and analyticity, because doing so will help us better understand Chomsky’s views.

Fodor agrees with Chomsky that most of our concepts are innate. However, he does not use the same arguments as Chomsky to reach this conclusion. While Chomsky’s poverty of stimulus argument points to a gap between what we know and the data of experience from which that knowledge is supposedly derived, Fodor argues that, because of certain facts about the structure of concepts, all known theories of how concepts are learned are incorrect in principle. He claims that there are two different ways of accounting for our knowledge of concepts. The first is to say that most of our concepts are definitions which are defined in terms of primitive concepts. The primitive concepts are either sensory primitives such as RED and SQUARE, or abstract concepts such as CAUSATION, AGENCY, and EVENT. What the primitive concepts are will depend on the nature of the theory being expounded. So, for example, the empiricist philosopher David Hume argues that the primitive concepts are sensory, while the linguist Steven Pinker argues that the primitive concepts are abstract concepts such as CAUSATION, AGENCY, and OBJECT which we are born with knowledge of. Fodor thinks that Pinker, Hume, and all definitional theorists are wrong because concepts cannot be explicated in terms of definitions.

The other theory of concept acquisition holds that our concepts are basically prototypes which are learned statistically. Fodor argues against this view because he thinks that prototype theory cannot account for the compositionality of concepts. I will not consider Fodor’s arguments against prototype theory here, because they are not relevant to the concerns of this thesis. Nor will I consider the fact that Fodor has lately retracted his views on radical nativism, because again this is not relevant to the overall argument of the thesis. I am only concerned with explicating why Fodor does not follow Chomsky in arguing that analytic connections follow from the fact of innate concepts. Fodor’s reason for not believing that analyticity follows from innate concepts is that he does not think that innate concepts have internal structure. His argument that concepts do not have internal structure is derived from his belief that concepts are not definitions. Fodor essentially uses three different arguments against the claim that concepts are definitions:

(1) Lack of definitions: In general we have not found definitions for concepts. There are millions of concepts and very few definitions (maybe fifty or so). (1998, 46)

(2) Developmental argument: Not all concepts can be definitions; some concepts must be primitive, and the concepts which are definitions must be defined partly in terms of the primitives. This being so, we should expect there to be a developmental stage at which children know only primitive concepts. However, we have no evidence to support this claim. Therefore, concepts cannot be definitions. (ibid, 47)

(3) Production argument: If concepts were definitions, then sentences which contained complex concepts would be harder to produce than those which contained simpler concepts, because it would take longer to call to mind the definition of a complex concept when speaking the sentence. However, the experimental evidence indicates that the production of sentences involving complex concepts takes no longer than the production of sentences which contain less complex concepts. (ibid, 49)

Fodor believes that his arguments against definitions are decisive. He claims that without definitions concepts have no internal structure, and furthermore that if concepts have no internal structure, then there are no analytic sentences. Kant famously defined an analytic sentence as a sentence in which the predicate is entirely contained within the subject; obviously, if a concept has no internal structure, then nothing can be contained within it. For this reason, Fodor thinks that there are no analytic sentences. He notes that one of the main arguments for definitions is that they explain people’s felt intuitions of analyticity. So this gives us two options: (1) accept that people’s intuitions of analyticity are a reason to believe in conceptual connectedness; or (2) explain away the intuitions. Since Fodor has offered reasons to deny that concepts have internal structure, he opts for (2), explaining away the intuitions of analyticity.

 

                  EXPLAINING AWAY ANALYTICITY INTUITIONS

As we saw above, Quine explained away our intuitions of analyticity in terms of his web of belief picture. This worked for supposed non-trivial analytic sentences like F=ma. However, it was less plausible as an explanation of apparently trivial analytic sentences like ‘‘All bachelors are unmarried men’’. We cannot plausibly explain away cases like ‘‘All bachelors are unmarried men’’ by saying that they are deeply embedded in our total web of belief.

As we have already seen, Putnam claimed that Quine’s argument worked as a criticism of the a priori but not as a criticism of apparently trivial analyticity. He claimed that a sentence such as ‘‘All bachelors are unmarried men’’ is an analytic sentence, and that its analyticity derives from the fact that BACHELOR is a one-criterion concept. The concept BACHELOR is not, according to Putnam, connected to any other concepts in our web of belief other than the concepts UNMARRIED and MAN. Putnam argues that sentences which people intuitively find analytic, but which are not deeply connected to our web of belief, can be explained by his notion of one-criterion concepts. Fodor disagrees with this explanation because he thinks it is circular: in order to say which concepts are one-criterion concepts, you already need to know which connections are analytic. So the explanation of analyticity presupposes the existence of the very thing we are doubting, and we are back to Quine’s circle of terms again.

Fodor offers a different account of one-criterion concepts which he thinks is less problematic than Putnam’s. Fodor agrees with Putnam that one-criterion concepts exist. He furthermore thinks that he can use these one-criterion concepts to explain people’s intuitions of analyticity. However, Fodor argues that, since he is not using one-criterion concepts to explicate analyticity, his story is not circular in the way that Putnam’s is. Here is Fodor’s story in a nutshell:

Suppose you think the only epistemic route from the concept C to the property it expresses depends on drawing inferences that involve the concept C*. Then you will find it intuitively plausible that the relation between C and C* is conceptual; specifically, that you can’t have C unless you have C*. And the more you think that it is counterfactual supporting that the only epistemic route from C to the property it expresses depends on drawing inferences that involve the concept C*, the stronger your intuition that C and C* are conceptually connected will be. (Ibid, 83)

 

Fodor argues that people’s intuitions of analyticity are really just epistemic intuitions which are mistaken for semantic connections. So, for example, I can only know that someone is a bachelor by knowing that he is an unmarried man. According to Fodor, I know that all bachelors are unmarried men because of epistemic access to the facts in the world, not because of the internal structure of concepts. He holds that people’s intuitions of analyticity are illusions which result from conflating epistemic properties with semantic properties.

So we can see from Fodor’s argument that analyticity does not follow from innateness. If the innate concepts are unstructured, then they will not automatically yield analytic connections. Chomsky assumes that children’s innate concepts are structured and will, as a result, automatically yield analytic connections. Furthermore, unlike Fodor and Quine, he takes it for granted that people’s intuitions of analytic connections are good evidence that such connections exist.

In a personal communication I asked Chomsky why he believed that analyticity followed from innateness:

You’re quite right that it isn’t obvious, but I think it is correct.  It’s an empirical issue, in principle, but a hard one to investigate — and like other empirical questions, certainty is unattainable, just a high degree of plausibility.  Personally, I think that serious inquiry would reveal that “tomorrow is two days after yesterday” or “my uncle is male” or “nightmares are dreams” and much else would withstand the harshest tests, and I don’t think that Quine or Fodor have suggested any reason to doubt these expectations. (Chomsky: Personal Communication)

 

Chomsky's reply is instructive because he simply claims that certain constructions are obviously analytic and that future tests would confirm this belief. He does not answer the question of why analyticity follows from innateness; he merely claims that it is in principle an empirical question. Presumably his reasoning is that people do recognise a distinction between analytic and synthetic sentences, and since he believes that people are born with a large stock of innate concepts, he takes the structure of these innate concepts to be our best explanation of our intuitions of an analytic/synthetic distinction. However, this vague argument relies on an unproven claim about innate concepts, and on a further unproven claim about the structure of these purported innate concepts. So Chomsky thus far provides no real explanation of people's felt intuitions of a distinction between analytic and synthetic sentences.

Chomsky appeals to the fact that certain sentences are obviously analytic; presumably he means that such sentences are intuitively felt to be analytic. However, the mere fact that people have intuitions of analyticity does not tell us whether Fodor's story or Chomsky's is correct.

Since Chomsky has offered no compelling evidence to support his claim that humans are born with innate concepts, a debate between him and Fodor on whether innate concepts have internal structure is a non-starter: he cannot support his views on analyticity by appealing to innate, internally structured concepts. Fodor has offered some arguments for innate concepts[2]; however, they offer little support to Chomsky's position.

It is of course open to Chomsky to use Fodor's arguments for innate concepts; however, a vital part of Fodor's argument is that concepts are not definitions and so have no internal structure. If Chomsky did use Fodor's arguments, he would be left claiming that concepts are innate but have no internal structure, and hence yield no analytic connections. On either approach, then, we get no derivation of analytic sentences from innate concepts, and Chomsky has offered us no real reason to believe that analyticity can be derived from innateness.

We have shown that analyticity does not follow from innate concepts. It is worth noting that the converse also holds: denying that innate concepts exist is not thereby denying that analytic connections exist in natural language. Empiricists such as Ayer, Carnap, and Hume, who are not committed to the view that concepts are innate, nonetheless hold that analytic connections exist in natural language. Ultimately, then, Chomsky's arguments for innate concepts are irrelevant to facts about analyticity unless he can provide evidence that such innate concepts are structured in a way that yields analytic connections. Thus far Chomsky has provided no evidence about the structure of these supposedly innate concepts, so we have no reason to believe that innate concepts lead to analytic connections in natural language.

Independently of his unproven claims about innate concepts yielding analytic connections, Chomsky's arguments for analytic connections in natural language focus on three primary considerations. The first is that ordinary speakers can readily tell whether a sentence of natural language is true in virtue of the meanings of the words involved together with extra-linguistic fact, or whether it is made true entirely by facts of meaning. The second is that he has never seen a convincing counterexample to clear cases of analytic sentences, such as 'Tomorrow is two days after yesterday'. The third is that he thinks sentences with a relational structure, such as 'If John killed Bill then Bill is dead', are more difficult to find counterexamples to than the simple examples focused on in the philosophical tradition, such as 'All bachelors are unmarried men'.

Chomsky's first claim, that people intuitively recognise a distinction between statements which are true in virtue of meaning and those which are not, has been tested experimentally through the use of questionnaires. The small literature does seem to indicate that people have an intuitive sense of a distinction which loosely corresponds to what is traditionally called the analytic/synthetic distinction. However, such studies involve showing people different sentences which they categorise into two groups, and the fact that people categorise in a way consistent with an analytic/synthetic distinction proves little. Such categorisation may reflect an epistemic categorisation rather than a semantic one, and the various studies do not distinguish between the Quine/Fodor story and the Chomsky story. The Quinean model would also predict that people have an intuitive sense of a distinction between truths like '2+2=4' and 'Most dogs are less than six feet tall'.

The difference between Quine and Chomsky on the question of analyticity is whether a person's intuitive sense of analyticity is best explained in terms of an innate system of concepts, or in terms of beliefs deeply embedded in a person's holistic theory of the world. The weak point of Quine's view is that his web-of-belief story works well for epistemologically significant truths such as '2+2=4' but not for 'All bachelors are unmarried men', which cannot reasonably be claimed to be deeply embedded in our total theory of the world. However, Putnam's one-criterion concepts, as modified by Fodor, capture such trivial cases of analyticity well: Fodor has shown that they can be viewed as epistemic knowledge confused with conceptual knowledge. So the fact that people have intuitions of analytic connections in natural language is not that significant; such intuitions can be explained away either as intuitions of truths deeply embedded in our web of belief or as reflexes of one-criterion concepts.

Chomsky's second reason is not very convincing, as philosophers have shown over the last fifty years that some statements which seem immune to revision are in principle revisable; see, for example, Putnam on the sentence 'All cats are animals'. Chomsky could reply that while some apparently analytic sentences have been shown to be in principle revisable, the paradigm cases have not. If he were to argue this, then in order to make his claim testable he would need a list of paradigm cases of analytic sentences, and he would then need to show how such sentences could withstand the most strenuous tests. Chomsky has never done anything like this. His remarks on analytic sentences remain just that: remarks, and unproven ones.

Chomsky has also claimed that philosophers erroneously focus on analytic sentences with too simple a structure; he claims that sentences with a relational structure provide much better evidence for analytic sentences. To test this claim we need to consider some examples of sentences with a relational structure and ask whether they support his view over Quine's. He has used two examples repeatedly: (1) If John killed Bill, then Bill is dead. (2) If John persuaded Bill to go to college, then John caused Bill to intend to go to college. Here Chomsky discusses these sentences:

Notice again that we appear to have connections of meaning in such cases as these; we have a rather clear distinction between truths of meaning and truths of fact. Thus, if John persuaded Bill to go to college, then Bill at some point decided or intended to go to college and did so without duress; otherwise, John did not persuade Bill to go to College. Similarly if John killed Bill, then Bill is dead (though John may or may not be depending on the facts). These are truths of meaning not of fact. (2000b, 62)

 

Sentences 1 and 2 are examples of sentences with a relational structure. The use of the predicates 'killed' and 'persuade' in the antecedents has a direct bearing on what words are allowed in the consequent. Take Sentence 1 and consider it as an argument schema: 'If X killed Y then Y is ...'. What Chomsky seems to want to say is that the only way to make the schema true is to fill the blank with 'dead' or something synonymous with 'dead'. On a strict reading this is patently false: we can insert 'not going to play football next week', 'not going to run a marathon next week', and so on, and the truth value remains invariant. So it could be claimed that the relational structure does not in any obvious sense force us to infer, from the fact that John killed Bill, that Bill is dead. On closer inspection, however, it becomes clear that anything we fill into the blank must be consistent with the fact that Y is dead. If one were to fill the blank with 'going to sing his favourite song tomorrow', one would be constructing a false argument, because singing a song is not consistent with being dead. The two fillers above which yield true arguments work precisely because not doing something is largely consistent with being dead. From this we can conclude that whatever we fill into the consequent must at least be consistent with being dead, though it obviously need not be synonymous with being dead. So for 'X killed Y, so Y is -', only something consistent with Y being dead can be slotted into the consequent if the conditional is to remain true.
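To make the constraint explicit, here is a minimal first-order sketch of the reasoning (my own illustrative regimentation, not Chomsky's notation; the predicate names are just stand-ins):

$$\forall x\,\forall y\,\big(\mathrm{Killed}(x,y)\rightarrow \mathrm{Dead}(y)\big)$$
$$\forall y\,\big(\mathrm{Dead}(y)\rightarrow \neg\,\mathrm{SingsTomorrow}(y)\big)$$
$$\therefore\ \forall x\,\forall y\,\big(\mathrm{Killed}(x,y)\rightarrow \neg\,\mathrm{SingsTomorrow}(y)\big)$$

On this gloss, a filler yields a true conditional whenever being dead entails it; mere consistency with being dead is the weaker, necessary condition that rules out fillers like 'is going to sing his favourite song tomorrow'.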

If one puts 'is dead' into the consequent one gets an analytically true statement. Likewise, if one puts 'is not playing tennis' one also gets an analytic sentence. This construal leaves us in a situation where we cannot equate analyticity with synonymy: 'X killed Y' is obviously not synonymous with 'Y is not playing tennis', despite the fact that 'X killed Y, so Y is not playing tennis' comes out analytic. Chomsky could argue that such sentences are analytic because of the internal structure of the complex concept KILLED. He could argue that the complex structure of the concepts we are born with automatically tells us what inferences can be drawn from them purely on the basis of their meaning[3]. So our language faculty determines what can be filled into the schema 'X killed Y, so Y is -' in order to yield analytic connections. Here we have an argument which seems to show how we can go from an innate concept such as KILLED to analytic sentences constructed on the basis of the internal structure of that concept. However, such an argument again relies on assumptions about innate concepts which have not been shown to be correct. If Chomsky wants to run it, he needs to find evidence to support his belief in innate concepts such as KILLED. Independent of the argument from innate concepts, all that is left is the claim that some analytic statements are difficult to refute, and this claim is largely consistent with Quine's view that such statements are simply deeply embedded in our overall theory of the world.

So Chomsky's claim that sentences of a relational nature provide stronger evidence for analytic connections in natural language than the simpler cases philosophers typically discuss has been shown to be unfounded; his assumption that they do relies heavily on his belief in innate concepts. The appearance that sentences with a relational structure offer cases of analytic connections can instead be explained in terms of how deeply embedded notions like KILL and INTENTION are in our overall theory of the world.

The above discussion shows that Chomsky has offered no compelling arguments to demonstrate that Quine's critique of the analytic/synthetic distinction is incorrect. It is possible in principle that Chomsky's positive views on analyticity are correct; however, we do not as yet have any empirical evidence to support them. In the next post I will assume that Chomsky is right that analytic sentences exist in natural language as a result of constraints imposed by the concepts we are born with, and I will try to explicate what type of analyticity he is thereby committed to, and to what extent analytic connections encoded in the brain of a subject meet the criterion of analyticity as traditionally conceived.


[1] John Stuart Mill was an exception to this rule.

[2] I will not here evaluate whether Fodor's arguments for innate concepts work, because this has no relevance to the argument of the thesis.

[3] For a detailed discussion of issues connected to the above, see Fodor's 1970 'Three Reasons for Not Deriving "Kill" from "Cause to Die"' and Pietroski's 'Small Verbs, Complex Events: Analyticity without Synonymy'. The debate between Pietroski and Fodor would take us too far afield from the concerns of this thesis but is worth reading in its own right.

Lacan and Cognitive Science Part 1

                                 LACAN AND COGNITIVE SCIENCE

Psychoanalysis has undergone a quiet revolution as attempts have been made to (1) make its structure more explicit and formal (see, for example, Matte Blanco and Wilfred Bion) and, more recently, (2) merge psychoanalysis with neuroscience. Psychoanalysts like Mark Solms and Joseph Dodds have led the charge in creating the new field of neuropsychoanalysis. Some neuroscientists have joined them by carrying out neuroscientific studies which aim to test psychoanalytic theories; Heather Berlin, Eric Kandel, and Jaak Panksepp, for example, have done a lot of work of this kind. These studies confirm and disconfirm different aspects of psychoanalytic theory: the Oedipus complex has received little support from neuroscience, while phenomena like repression have received ample support. There is still a long way to go in the attempt to bring psychoanalysis into the scientific fold.

Despite my optimistic assessment above, there is still a lot of work to be done. The formal models of psychoanalysis that I mentioned are not supported by enough behavioural data, and psychoanalytic theory in general does not pay enough attention to developmental psychology and behavioural research. Part of a psychoanalyst's training involves detailed observation of children and analytic interpretation of their behaviour. This aspect of their training is vital for a discipline which sometimes explains adult neurosis in terms of childhood trauma. However, a serious difficulty with this approach is that when one sticks to pure observation it is possible to interpret the same data by applying different theories to it. To know whether one's theory is the correct theory of the data, as opposed to an ad hoc story which just happens to fit it, we need experiments. Experiments serve the purpose of testing our theories to see whether they hold up under stringent conditions. If psychoanalysts want their theories to be accurate they need to construct experimental tests of their own, or at the very least use the experimental evidence which child psychologists like Pinker, Soja, Carey, and Spelke have gathered.

The above difficulties are easily rectified, and when this is done I believe it will greatly help psychoanalysis to become more scientific. There is, however, a different and more difficult challenge facing attempts to make psychoanalysis more scientific. Contemporary psychoanalysis is divided into sects: schools where the teachings of Lacan are followed, pure Freudian schools, followers of Anna Freud's theories, followers of Melanie Klein's theories, and so on. Neuropsychoanalytic work typically focuses on orthodox Freudian theory, which lends itself well to neuropsychoanalytic testing. However, there is no reason in principle why the theories of Klein and Anna Freud cannot be subjected to similar testing; in fact, in a 2013 talk, 'Anxiety One or Two?', Yoram Yovell provided some surprising neuroscientific evidence which supports Melanie Klein's paranoid/schizoid theory.

Of the theorists mentioned above, there is one whose theories are less easily integrated with modern scientific theory: Jacques Lacan. This is a serious problem. Klein, Sigmund Freud, and Anna Freud disagree with each other on various topics, but because their theories are testable, as we learn more we can discover which of them (if any) is correct on the topics where they disagree. Since Lacan's theories are less easily studied biologically, they are harder to test. Karl Popper may have been wrong about a lot of things, but he was surely right that a theory which is totally untestable is unscientific. So if Lacan is to be taken seriously, it is important that we try to discover what aspects of his theories are scientific.

Lacan's work has been denounced by a variety of serious thinkers. The three quotes below are representative dismissals of his work by cognitive scientists and evolutionary psychologists:

“Jacques Lacan I actually knew. I kind of liked him. We had meetings every once in a while but quite frankly I thought he was a total charlatan, just posturing before the television cameras the way many Paris intellectuals do. Why this is influential I haven’t the slightest idea I don’t see anything that should be influential (Chomsky: Veterans Unplugged Podcast Dec 2012)”

“… Although Lacan uses quite a few key words from the mathematical theory of compactness, he mixes them up arbitrarily and without the slightest regard for their meaning. His ‘definition’ of compactness is not just false: it is gibberish. (Sokal and Bricmont: Intellectual Impostures)”

“With Lacan, matters were altogether different. The question of evidence was not even raised by his followers. Everything the great master wrote was taken on trust, as if it were holy writ. Everything Lacan said was right, just because he said it. Debate in Lacanian seminars was purely a matter of exegesis – what did the master mean by such-and-such a phrase? Nobody ever took the next logical step and asked – was he right? That was simply assumed. (Dylan Evans: From Lacan to Darwin, p. 6)”

The above quotes are interesting, but a couple of points need to be made. Chomsky is a brilliant linguist and political theorist; however, he has never shown any sign of engaging with Lacan in a serious way. So, despite Chomsky being a very highly regarded thinker, his views on this topic should not be accorded any more significance than, say, Stephen Hawking's uneducated views on philosophy. Sokal and Bricmont made some serious criticisms of Lacan, demonstrating that his use of mathematics was nonsensical. However, they were primarily considering Lacan from the point of view of critical theory, so their criticisms, while interesting and to the point, do not necessarily refute everything Lacan said about psychoanalysis. Not everybody would agree with me on this particular point; Richard Dawkins responds to replies such as mine as follows:

 “Perhaps he is genuine when he speaks of non-scientific subjects? But a philosopher who is caught equating the erectile organ to the square root of minus one has, for my money, blown his credentials when it comes to things that I don’t know anything about. (Richard Dawkins: Review of Intellectual Impostures p. 1)”

I must admit that I am very sympathetic to Dawkins' point of view here; when theorists make such stupid claims they do lose a lot of credibility. However, history teaches us that clever people can be brilliant on one topic and talk nonsense on others: consider how Francis Collins can be brilliant at genetics and hold silly views on God, or how Newton was a brilliant physicist yet held silly views on alchemy. So, Dawkins' point aside, I think we should examine whether Lacan's views on psychoanalysis hold up to critical scrutiny.

Dylan Evans, whom I quoted above, is a different case from Dawkins, Sokal, or Chomsky, because he trained as a Lacanian psychoanalyst and is a scholar of Lacanian theory. Evans began with a degree in cognitive science before training as a Lacanian psychoanalyst and enrolling in a PhD programme studying Lacan. He wrote a number of books on Lacan, including his well-known dictionary of Lacanian terminology, and his books are still used as references in Lacanian studies. However, while practicing as a psychoanalyst, Evans noted that using Lacanian theory was not helping his patients, and for this reason he stopped practicing. He returned to university to study Lacan for his PhD, before leaving in disgust when he discovered that Lacan's theories did not hold up to critical scrutiny and that his colleagues did not seem to care. So Evans left Buffalo and went to the London School of Economics, where he compared Lacanian psychoanalysis with evolutionary psychology and came to believe that evolutionary psychology is the scientific way to study the mind, while Lacanian psychoanalysis is an unfalsifiable series of dead ends.

Evans claimed not only that Lacan was treated as a cult leader by his followers, but also that Lacan tried to save Freud from having his biological hypotheses refuted by translating them into a series of metaphors[1]. However, Evans correctly noted that this approach in effect robbed Freud's theories of any claim to scientific status:

“This strategy was doomed, however. It appeared to save Freud’s work from refutation by modern biology, but at the price of removing all empirical import. The biological Freud was wrong, but at least he advanced clear, testable claims. The cultural-linguistic Freud that Lacan invented, on the other hand, was completely untestable. He was not merely impervious to contradictory evidence in biology; he was impervious to any evidence at all. Lacan rescued Freud from a fatal encounter with modern biology by removing him from the world of science altogether. (ibid p.10)”

Imagine how unscientific Darwinian theory would be if a similar approach were adopted. Charles Darwin, for example, had no workable mechanism to explain heredity. This doesn't bother evolutionary theorists: Darwin was a clever man who lived in a particular era and simply lacked the tools to solve that particular problem (he wasn't aware of Mendel's work). The aim of evolutionary theory is not to show that Darwin was correct about everything but to construct the best theory to explain the empirical data. Neuropsychoanalysts, like evolutionary theorists, are not interested in proving every claim made by a particular individual correct; rather, they are interested in constructing a correct theory of the mind. Neuropsychoanalysts simply think that a lot of contemporary findings support Freud, and they are modifying Freud's theories as new data comes in. This is the scientific method, and if Lacanians cannot adhere to it they should not be licensed as practicing psychoanalysts.

It should be noted that the idea of a pure Lacanian analyst does not capture what goes on in typical psychoanalytic sessions. Psychoanalysts are typically pragmatic: they use whatever tools they can to help their patient. An analyst will usually understand psychology, biology, and psychoanalysis quite well, and will adapt their approach as they learn more about human nature through daily analytic practice with their patients. Only an extremely bad analyst would stick rigidly to, say, Lacanian theory in their practice. However, since psychoanalysts are engaged in helping people in need, we need to guard against worst-case scenarios and consider analysts who damage their patients' well-being by rigidly applying a false theory.

Lacan wrote an enormous amount on psychoanalysis, discussing a wide variety of topics from psychosis to neurosis and hysteria. He drew on examples from literature and philosophy, and his dense prose is nearly impenetrable, so discussing his work in detail would take a multi-book series. It is a difficult task to analyse a theorist's views in a blog, but this is the task I will undertake here. In this first post I have set up the problem and indicated how I aim to evaluate it. In the next post I will outline Lacan's theory and discuss which aspects of it, if any, can be made sense of empirically. I will discuss his use of the psychologist Henri Wallon's (1931) work, which showed that at around six months both human infants and chimpanzees recognise themselves in the mirror; the infant becomes fascinated by his image, while the chimpanzee quickly loses interest in its own image. I will show how Lacan used Wallon's data to construct his concept of the Mirror Stage in the child's development, and I will argue that Lacan's use of the Mirror Stage is much too simplistic and needs to be modified in light of the mountains of empirical evidence gathered in developmental psychology over the last eighty years.

I will then discuss Lacan's claim that the child's self is constituted by being grafted onto the language of the other. I will again show that developmental psychology strongly indicates that Lacan is incorrect in the way he conceives of this issue.

In my final post I will discuss how Lacan uses his three concepts of the Real, the Symbolic, and the Imaginary to deal with neurosis, psychosis, and hysteria, and how his theories on these topics need to be modified in light of recent developmental and neuroscientific data. I will conclude by arguing that if Lacanian psychoanalysis is to become a scientific treatment it needs to subject itself to falsification by the standard methods of science, in the same way that Freudian ideas are being tested through their merger with neuroscience.


[1] This tactic has been used by theologians. In the past the Bible was treated as the literal word of God; however, whenever science refutes an aspect of the Bible, it is then claimed that the Bible should not be read literally. A similar approach has been taken to interpreting Wittgenstein: see Crary and Read (eds.), 'The New Wittgenstein', where similar verbal moves are used in an attempt to rescue the early Wittgenstein from refutation (Peter Hacker correctly pours cold water on this tactic in his chapter of the book).

Theoretical Reasoning and Practical Behaviour

               THEORETICAL REASONING AND PRACTICAL BEHAVIOUR

In a recent discussion of the relationship between phenomenology and heterophenomenology, an interesting criticism was made by the philosopher Jennifer Pepin which I think warrants a brief discussion. On the face of it the criticism seems so outlandish that it may seem appropriate to dismiss it with a smile. However, since I have seen examples of this sloppy thinking from other philosophers, I think it is important to spell out explicitly what is wrong with it. The criticism was as follows: a heterophenomenologist, upon seeing a wild wolf run aggressively towards him, cannot judge that the animal is about to attack but must rather stand there and consult a scientist before acting. As I have said, this caricature is so silly it is likely to bring a smile to most reasonable people's faces. It could be taken to indicate that the person using such caricatures is so desperate to avoid the standard methods of science being applied to the mind that they will cling to any absurdity that helps them keep the mind as the last sanctuary of magic and mysticism. I will argue below that this is not the case; rather, Jennifer thinks Dennett is guilty of similar caricatures, so she is playing him at his own game.

In a different context Alva Noe made similar criticisms. He was commenting on a recent spate of articles by psychologists and neuroscientists discussing evidence that animals are conscious. Noe remarked that we should not be impressed by these articles: we already know that animals are conscious, and we don't need science to prove it for us. He went on to say that people who need a scientist to convince them that animals are conscious have something seriously psychologically wrong with them.

Another example of this thinking comes from Massimo Pigliucci, who, when discussing trolley problems, noted an interesting neuroscientific study. In the study people were asked the standard ethical questions about whether they would push a fat man off a bridge to save the lives of other people about to be hit by a train, and while answering they had their brains scanned with fMRI. The interesting discovery was that people who typically gave the purely utilitarian answers had brain patterns consistent with psychopathy. The psychologist Bernard Baars used this study as a stick with which to attack utilitarian philosophers: in our Facebook group 'Philosophy, Science and Utube' he launched a savage personal attack on Peter Singer, claiming that he was a psychopath and that his ethical theories can be dismissed on these grounds.

I should be clear from the start that neither Jennifer, Massimo, nor Alva launched such a stinging attack on their opponents. They did, however, all make the similar mistake of confusing a theoretical viewpoint with a question of lived, practical, day-to-day affairs. All humans have certain folk intuitions which are vital in helping us negotiate our lived world of experience. These folk intuitions are so deeply embedded in our thought processes that we take them for granted and barely notice them as we go about our daily affairs. Some of these folk theories are vital for our day-to-day survival, and if our brains are damaged in certain ways our practical engagement with the world is severely impaired[1]. Our folk theories are not always correct: for example, humans are intuitive dualists (see Paul Bloom's Descartes' Baby), but it doesn't follow that dualism is true; in fact we have overwhelming evidence to the contrary. Nonetheless, a child who was not born with this intuitive dualism would suffer severe developmental delays; for example, autistic children suffer developmental delays partly as a result of not being intuitive dualists (Baron-Cohen 1995). So here we have a clear case where our folk psychology, which leads to false philosophical conclusions, can be very important for a child's development.

Now let us consider Alva Noe's discussion of animal consciousness. There is a sense in which he is correct. If a person went through their life thinking that no other animals were conscious, this would at the very least indicate a lack of empathy, and such a lack of empathy might affect the person's interactions with and behaviour towards animals and their fellow humans. However, these considerations have little to do with theoretical matters. A philosopher or scientist who is weighing the evidence for a claim cannot merely rely uncritically on their intuitions as they investigate the truth about the subject they are studying; rather, they have to construct tests which will either support or refute their intuitions or conjectures. Philosophers like Dennett and Descartes argue that animals MAY not be conscious; they are presenting theoretical arguments that they think support their position. The same is true of philosophers like Donald Davidson, who argues that non-linguistic animals MAY not have propositional attitudes, and of scientists like Marian Stamp Dawkins, who argues that animals MAY not be conscious. There is no reason to think that a person's theoretical views must override their intuitions. People who discover an evolutionary or neurological explanation of their love for their child do not simply stop loving their child. Likewise, a scientist could argue on theoretical grounds that some animals are not conscious, while his intuitive folk psychology overrides this theoretical belief in his day-to-day activities. So while Alva is correct that a person who instinctively did not experience animals as conscious and needed scientists to prove that they are would be extremely odd, none of this shows that there is anything wrong with scientific studies of animal consciousness, or with the public's fascination with the topic.

It should be noted that if Alva could show that people who hold certain theoretical positions on animal consciousness behave in a certain way towards animals in daily life, then his point would be very interesting. Noe has provided no such evidence, though it is certainly worth looking into. Prior to providing such evidence, however, his observation that people who needed a scientist to tell them an animal is conscious would have something wrong with them is of no consequence to the study of animal cognition and consciousness.

Similar considerations apply to Massimo's observation that people who go for utilitarian solutions to trolley problems have the brain patterns of psychopaths. This is undoubtedly an interesting discovery, and an ethicist who ignored it would be foolish. However, it is a massive stretch to argue from it that utilitarians like Peter Singer are psychopaths. A person's theoretical position on a topic in a philosophy discussion isn't always a reliable indicator of their behaviour in day-to-day life. There is no evidence, for example, that a deontologist will always behave on ethical matters according to the list of ethical rules they have formulated, and the same is true of utilitarians: Peter Singer does not always behave as his theory maintains that he should. Philosophers are human and don't always live up to their theoretical ideals. The fact that Peter Singer believes utilitarianism is the best moral philosophy doesn't make him a psychopath. Philosophers should have the space to consider the pros and cons of a particular ethical system without being attacked on such a personal level. If people like Baars really want to maintain their personal attacks on Singer, they will at the very least need some behavioural evidence to support their views.

I started this post by discussing the philosopher Jennifer Pepin's erroneous views on heterophenomenology. She is guilty of two things: conflating theoretical beliefs with practical behaviour, and misinterpreting what heterophenomenology is in the first place. Her claim that a heterophenomenologist, upon seeing an angry wolf running at him, would need to consult a scientist to find out how to behave implies that heterophenomenology bans the use of the intentional stance to interpret the behaviour of another. Obviously this is not the case. When discussing the heterophenomenological method in 'Consciousness Explained', Dennett noted the importance of adopting the following stance:

This sort of interpretation calls for us to adopt what I call the Intentional Stance (Dennett, 1971, 1978a, 1987a): we must treat the noise-emitter as an agent, indeed a rational agent, who harbours beliefs and desires and other mental states that exhibit intentionality or aboutness, and whose actions can be explained (or predicted) on the basis of the content of these states. Thus the uttered noises are to be interpreted as things the subjects wanted to say, of propositions they meant to assert, for instance, for various reasons. (Dennett: Consciousness Explained, p. 76)

Anybody who has read Dennett's discussion of heterophenomenology in chapter 4 of 'Consciousness Explained' (pp. 66-98) will know that Dennett wants us to use the intentional stance. So there is no reason, from Dennett's point of view, why he would not endorse using the intentional stance to interpret the behaviour of an oncoming wolf.

On the other hand, let us assume (falsely) that Dennett were some radical behaviourist like Skinner. Nothing would really change in our interpretation of the behaviour of the wolf. In practical situations we do what is expedient; our theoretical views usually don't interfere with our daily survival unless we have a lot of time to apply them. Perhaps if Skinner had unlimited time he could develop a stimulus-response theory which is more practical than a folk-psychological approach, though this is doubtful.

At this point one may be wondering why I am wasting my time on such a silly caricature. My reason is simple: I think it is very important that we do not attack theoretical positions by wrongly assuming that adopting them will have disastrous behavioural consequences. Unless Baars, Noe, or Pepin can demonstrate that these theoretical approaches do have such behavioural consequences, their critiques amount to nothing more than caricature. When I criticised Jennifer for caricaturing the heterophenomenologist's position, she replied that Dennett does this to phenomenologists all the time. There is a grain of truth to this. However, in this case fighting fire with fire is counter-productive.

To see this, let's consider a very silly caricature of phenomenology. Husserl recommends that when doing phenomenology we perform the epoché, the bracketing of all judgements about the existence of the external world. I could, without further ado, argue that according to people like Husserl, when a wolf runs towards us we need to consult a phenomenologist in order to discover whether the wolf actually exists in the external world. This is obviously an absurd interpretation of Husserl's position, and one that could only be adopted if I decided to ignore Husserl's actual argument and instead lampoon him by accusing him of holding silly beliefs. I suggest that when either side of a theoretical dispute adopts this approach it simply cuts off discussion, so it is seriously counter-productive if your aim is to get at the truth.

If we take Jennifer's criticism of heterophenomenology as sincere (as opposed to a deliberate caricature), then it is simply wrong on two counts: (1) it does not accurately characterise Dennett's position, and (2) it confuses theoretical questions about what methodology to use in experimental situations with practical questions about how to behave in everyday scenarios.

Before finishing I want to note briefly that I am not arguing that theoretical beliefs have no practical consequences for how a person lives. Rather, I argue that if a theorist wants to demonstrate this in a particular case, the burden of proof is on them to provide a compelling case. For example, I believe that a person's views on human nature have serious consequences for their politics: a person who accepts Rousseau's view of human nature will hold a different politics from a person who accepts Hobbes's view. However, if I want to make this case I need to do it in a serious way and engage with social psychology and related fields. Just asserting that I think it is true is not enough.

I have argued at length in another paper that Dennett's theory of the mind is partly determined by his own type of consciousness. I had to work very hard to make that case, and I don't think I have yet met the burden of proof. All I am asking is that theorists like Baars, Noe, and Pepin meet the same burden of proof when they make claims about what people's theoretical beliefs reveal about their behaviour and type of mind.


[1] For an interesting discussion of such a case see Antonio Damasio's discussion of Phineas Gage in his excellent 'Descartes' Error'.

The Churchlands, Consciousness, and Naturalised Epistemology

Patricia and Paul Churchland's most recent books, 'Touching a Nerve' and 'Plato's Camera' respectively, are interesting attempts to extend their ideas on how neuroscience can expand our understanding of traditional philosophical problems. Patricia's book is written in a more confessional mode than Paul's, as she intermingles her scientific evidence with stories from her own life and reflections on how the science relates to her experiences. Her book also draws on much more neuroscientific and psychological data than Paul's, bringing together interesting evidence on the nature of morality and, in particular, on the nature of consciousness.

            Paul's book, on the other hand, is more theoretical and aims to show how connectionist models, and the more neurally realistic Hebbian learning, can be used to solve traditional problems in epistemology. It can be read as a neurally focused version of Quine's attempt to naturalise epistemology. I have long called for naturalised epistemology to be updated in light of new scientific evidence. Since Quine's seminal paper 'Epistemology Naturalized' was written over four decades ago, most of the philosophical attention has been on whether Quine's conception of epistemology can actually deal with traditional epistemological concerns. So we have had challenges from people like Stroud claiming that Quine's project reduces to scepticism, and claims that Quine's naturalised epistemology cannot deal with normative questions. This back and forth has been instructive and important; however, I think some philosophers have become so obsessed with replying to non-naturalist critics that they haven't developed an adequate naturalised epistemology.

            As an undergraduate philosophy student, whenever the topic of naturalised epistemology came up Quine would be mentioned; nobody seemed to care that people like Chomsky had radically different views of how people go from stimulus to science. Few cared that Chomsky dismissed Quine's epistemology as unscientific behaviourist nonsense. Likewise, at the same time as Chomsky was criticising Quine, the developmental psychologist Piaget was sketching his own theory of genetic epistemology. While there were some conferences where people like Piaget, Fodor, and Putnam thrashed these issues out, typically in philosophical circles when naturalised epistemology was mentioned it was Quine's project that was thought of.

            In my own PhD I tried to sort out the debate between Chomsky and Quine on the nature of epistemology and naturalism, and to update the debate in light of current research in cognitive science and developmental psychology. However, I didn't deal with neuroscientific data. In this sense I find Paul's book a breath of fresh air: he brings the project of naturalising epistemology much closer to reality by trying to make it neurologically realistic. When philosophers debate whether epistemology can be naturalised, it is Paul Churchland's project, not Quine's, that they should be engaging with (no disrespect to Quine intended; I am merely acknowledging that the science has evolved since his time).

            I think that Patricia's and Paul's books should be read together; taken as a whole they offer a detailed introduction to neurophilosophy and dispel some of the caricatures of the discipline that are offered in the literature. Nonetheless, there are aspects of both books that I disagree with to varying degrees.

            I think that Patricia's explanation of consciousness is excellent and one of the most compelling I have encountered. She notes the simple point that the most basic experience we have of being conscious and not conscious is the difference between being awake and being asleep. So the study of brain activation during sleep will obviously be an invaluable tool for studying consciousness: we can study and contrast the brain states responsible for being awake, dreaming (REM and non-REM), and being in a deep sleep. Sleep is evolutionarily ancient. Even fruit flies sleep (see Robert Greenspan); caffeine keeps fruit flies awake, and they respond to anaesthetics in the same way humans do (Touching a Nerve, p. 227). Churchland notes that a good way to study sleep is by studying sleep disorders.

She begins with sleepwalking disorders. When sleepwalking, our perceptual systems are working well enough to guide us around the world: we can open doors, and we can find our way to places we know (a friend's house, for instance), but our perception is not normal. We may not see or recognise people in front of us, and we may give no indication of hearing what somebody says to us. So while we do have some sort of perceptual ability, it is different from the perceptual abilities of our conscious selves. We also typically do not remember our behaviour while sleepwalking, and we are not aware of the bizarreness of our behaviour (e.g. walking down the road naked), behaviour which would be embarrassing if done while conscious. It would be inappropriate to say that people who sleepwalk are unconscious; it makes more sense to say that they are experiencing an alternative form of consciousness. Studying the neuroscience of this alternative form of consciousness will help us to understand the nature of standard forms of consciousness.

            One hypothesis is that sleepwalking is a form of active dreaming in which the normal paralysis that occurs during dreams is switched off. Patricia Churchland denies that this is the case, and to make her point she discusses some contemporary neurological research into the nature of dreams. REM does not occur during deep sleep, and people who are woken during REM typically report having dreams. However, this should not be taken to show that dreams do not occur in non-REM or deep sleep states: people who are woken from deep sleep still sometimes report that they have had dreams, though these dreams are typically reported to be less visual. She reports that brain scans of people sleepwalking show that the brain activity which occurs during dreaming does not occur when sleepwalking, and so she concludes that sleepwalking is not the same thing as dreaming. She also notes the strange fact that the brain activity which occurs during REM is closer to that of a person who is awake than to that of a person in deep sleep; one difference between the brain activity of a person who is dreaming and that of a person who is awake is that the awake person's prefrontal cortex is more active. She rightly concludes from this brief discussion that a good way to understand the nature of consciousness is to study the neuroscientific differences between people who are awake and people who are unconscious, i.e. in a deep sleep.

She agrees with Dennett that there is no single location in the brain where consciousness occurs, and argues that there are particular structures in the brain (along with the looping links connecting those structures) which are necessary for the maintenance of consciousness. She maintains that it is important to distinguish between two things:

(1) The structures that support being conscious of ANYTHING AT ALL.

(2) The structures that support being conscious of THIS AND THAT (the contents of consciousness).

She discusses the work of Nicholas Schiff, a neurologist who studies disorders of consciousness. His research has led him to the central thalamus, with its ingoing and outgoing pathways, as what gives us the capacity to respond to the world. She argues that there are looping neurons from the upper layers of cortex projecting right back to the ribbon of neurons in the central thalamus (ibid p. 235); this looping back allows an especially potent but transient connection to be maintained for a chunk of time.

She argues that the regions of the thalamus that connect to, for example, the visual areas do so in a domain-specific manner. The retina is connected to the LGN in the thalamus, but the LGN projects only to visual cortex area V1, not to everywhere in the brain, nor even to everywhere in the visual cortex (ibid p. 235). This is typical of the other sensory modalities; she calls it a system-by-system development of a specific signal. Things are different in the central thalamus: its pattern of connectivity suggests a different set of functions, namely be awake and alert, or down-regulate and doze. She makes the point that this division of functions in the thalamus corresponds to the distinction between being aware of something in particular and being aware of anything at all.

She goes on to say that there are other features of the central thalamus which separate it from other cortico-thalamic systems, such as the style of its neuronal activity. The central thalamus has unique connectivity and unique behaviour. Think of this in terms of the sleep/wake cycle: during awake and dreaming states its neurons fire in bursts at an unusually high rate (800 to 1,000 times per second), and this bursting pattern is not displayed during dreamless sleep. This strongly indicates that the central thalamus plays a big role in making people conscious.

The ribbon of neurons that is the central thalamus is controlled by the brainstem and regulates the cortical neurons, readying them for consciousness. Put succinctly: BRAINSTEM + CENTRAL THALAMUS + CORTEX is the support structure for consciousness.

Importantly, she describes what happens when the central thalamus is damaged:

(1)   If a lesion occurs on one side of the central thalamus, people tend not to be conscious of the affected side. If both sides are affected, the person is in a coma.

To be aware of something, say a dog barking, one needs the brainstem, central thalamus, and upper layers of cortex to be in their 'on' state. Central thalamic neurons must be firing in bursts that ride the lower-frequency brain wave of 40 hertz, and in addition the specific areas of the thalamus (for sound and sight, respectively) must be talking to the proprietary areas of the cortex. She then asks how we manage to be conscious of a complex scene when so much of what supports consciousness must be unconscious processing, and she relies on Bernard Baars' Global Workspace Theory to explain these facts.

Baars argued that the way to begin was by listing the significant psychological properties and capacities associated with being aware of SPECIFIC events:

(1)   Our conscious experiences are highly integrated and so rely on a plethora of background unconscious work. (Think of being conscious of what a person says to you: a massive amount of unconscious processing of syntax, phonology, etc. takes place behind the scenes to determine what you experience.)

(2)   Your background information about, say, your dog is available to help you decide what to do in a novel situation. So your background information must interact with your perceptual experiences.

(3)   Consciousness has a limited capacity. You cannot follow two conversations at once.

(4)   Novelty in a situation calls for conscious attention.

(5)   Information that is conscious can be accessed by many other brain functions.

Churchland notes that these five proposals together provide an interesting framework for studying consciousness empirically. Global Workspace Theory indicates the importance of a rich integration of information from different sources in yielding conscious experience: the idea is that consciousness is a consumer of information from a variety of different sources. This leads to another question: what is the difference in the brain between non-conscious processing and conscious processing?
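Before turning to that question, it may help to make the workspace architecture a little more concrete. The following is a deliberately toy sketch of my own (it is not Baars' or Churchland's model, and the process names and salience numbers are invented for illustration): several specialist processes bid for a limited-capacity workspace, and whatever wins the competition is broadcast back to all of the specialists.

# Toy sketch of a Baars-style global workspace (illustrative only).
from dataclasses import dataclass

@dataclass
class Proposal:
    source: str      # which specialist process produced the content
    content: str     # the candidate content
    salience: float  # how strongly it bids for the workspace (novelty, urgency, etc.)

class GlobalWorkspace:
    """Limited-capacity workspace: one item at a time is selected and broadcast."""
    def __init__(self, specialists):
        self.specialists = specialists  # processes that receive the broadcast

    def cycle(self, proposals):
        # Limited capacity: only the most salient proposal becomes "conscious".
        winner = max(proposals, key=lambda p: p.salience)
        # Broadcast: the winning content is made available to every specialist.
        broadcast = {s: winner.content for s in self.specialists}
        return winner, broadcast

if __name__ == "__main__":
    workspace = GlobalWorkspace(["memory", "planning", "speech", "motor"])
    proposals = [
        Proposal("vision", "dog running towards me", salience=0.9),     # novel and urgent
        Proposal("audition", "background radio chatter", salience=0.2),
        Proposal("interoception", "mild hunger", salience=0.4),
    ]
    winner, broadcast = workspace.cycle(proposals)
    print(f"Conscious content: {winner.content} (from {winner.source})")
    print(f"Broadcast to: {sorted(broadcast)}")

The only point of the toy is to display two items from Baars' list in working form: limited capacity (one winner per cycle) and the wide availability of whatever becomes conscious (the broadcast).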

One way to investigate what is going on in the brain when a process is non-conscious rather than conscious, she notes, is masking tests. If the word DOG is flashed on a screen, followed by a delay of about 500 milliseconds, and then 'xxxx' is flashed, you will consciously see first DOG and then xxxx. However, if DOG is followed immediately by xxxx, you will not see DOG, only xxxx. Experimenters who want to get traction on the difference between conscious and non-conscious processing scan the brains of people for whom DOG is masked and people for whom it is not. What results have they found?

When the visual signal DOG is masked (not consciously perceived), only early visual areas (in the back of the brain) show activation. By contrast, when the visual signal is consciously seen, the posterior activity spreads to more frontal regions, including parietal, temporal, and prefrontal areas. Dehaene and Changeux refer to this as global ignition. This global ignition picture of spreading activation from posterior to anterior regions nicely fits Baars' Global Workspace Theory, which states that conscious perception involves global connectivity in the brain.
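As a rough schematic of the masking logic just described (my own toy illustration: the 50-millisecond cut-off and the trial structure are invented stand-ins, not figures from the actual studies):

# Toy schematic of a masking trial: whether the prime "DOG" is consciously
# reported depends on the prime-to-mask delay (illustrative threshold only).
def masked_trial(prime: str, mask: str, delay_ms: int, threshold_ms: int = 50) -> dict:
    consciously_seen = delay_ms >= threshold_ms
    return {
        "prime": prime,
        "mask": mask,
        "delay_ms": delay_ms,
        "report": f"{prime} then {mask}" if consciously_seen else mask,
        # Stand-in for the imaging contrast: local posterior activation when
        # the prime is masked, widespread "global ignition" when it is seen.
        "activation": "global ignition" if consciously_seen else "early visual areas only",
    }

if __name__ == "__main__":
    for delay in (0, 500):  # immediate mask vs. the ~500 ms delay mentioned above
        print(masked_trial("DOG", "xxxx", delay_ms=delay))

The sketch is only meant to keep the two contrasts straight: seen versus unseen at the level of report, and local versus global activation at the level of the brain.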

Churchland speaks of RICH CLUB NEURONS as central to making the wiring of the brain cost-effective and ensuring that communication within the brain is efficient. An important discovery of neuroscience is that the connectivity underlying global ignition in all mammals has a SMALL WORLD organisation: not every neuron is connected to every other neuron, but any given neuron is only a few connections away from any other. Some neurons are extremely well connected; these are the rich club neurons.
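The wiring-cost point can be illustrated with a quick sketch (assuming the networkx library is available; the node and neighbour counts are arbitrary toy numbers, not anatomical figures): a network in which each node keeps only a handful of connections, a few of them long-range, still has short paths between any two nodes, at a tiny fraction of the cost of connecting everything to everything.

# Small-world illustration using networkx (arbitrary toy numbers).
import networkx as nx

n = 1000                     # toy "neurons"
full = nx.complete_graph(n)  # every node wired to every other node
small_world = nx.connected_watts_strogatz_graph(n, k=10, p=0.1, seed=1)  # 10 neighbours each, 10% rewired long-range

print("Fully connected edges:", full.number_of_edges())        # 499,500 connections
print("Small-world edges:", small_world.number_of_edges())     # 5,000 connections
print("Average hops between nodes:",
      round(nx.average_shortest_path_length(small_world), 2))  # still only a few hops

Nothing about the brain follows from the toy numbers, of course; the sketch is only meant to show why 'not every neuron connected to every other neuron' is compatible with every neuron being only a few steps from every other.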

A signal may follow this sequence: from a locally connected neuron to a feeder neuron, to a rich club neuron, to a different feeder neuron, and then on to a new local neuron. A small-world organisation with hubs is more efficient than having every neuron connected to every other neuron. But how does the rich club framework connect with the central thalamus? Consciousness of sensory signals involves linking spatially quite separate regions of the brain. As attention shifts, the prevailing linkage weakens and other neuronal pools take their turn in making strong links. So the bursts of spikes displayed by the central thalamus are responsible for, say, catching the gist of a lecture, while the local connectivity of neurons within groups provides the context of the gist. Studies of the effect of anaesthetics on people support this model: anaesthetics work by disrupting signals between neurons, thus blocking global ignition.

There are three properties that are very important in the neurobiology of consciousness:

(1)   Rich Club Neurons and their ability to make fast connections to other Rich Club Neurons, thereby providing a scaffolding for rich integration of information.

(2)   Global Ignition for brain events that reach consciousness.

(3)   The Central Thalamus with its role in enabling specific contents of awareness during the awake and dreaming states.

So that, in a nutshell, is Patricia's theory of consciousness. As a summary of the state of play of scientific research into consciousness it is excellent. However, from a philosophical point of view it leaves a lot of questions unanswered. Philosophers like Thomas Nagel, David Chalmers, and even John Searle would accept the scientific evidence Patricia provides but argue that it leaves the philosophical questions untouched.

            Nagel argues that consciousness emerges from brain processes but that the subjective experience of consciousness cannot be reduced to the material brain. John Searle has argued that consciousness is a biological phenomenon, but an emergent one, and hence something different from the material basis from which it emerged. David Chalmers notes that explaining the neural correlates of consciousness is what he calls the 'easy problem' of consciousness, and that it leaves the hard problem (explaining how our actual subjective experience emerges from these neural correlates) untouched. So the trio of philosophers I have mentioned could justly complain that Patricia is ignoring the philosophical problems that they are concerned with.

            In her ‘Touching a Nerve’ she does briefly criticize Chalmers’ views by arguing that his negative claims about whether consciousness can be explained by neuroscience lead to no real explanatory research programmes. His entirely negative claims, she argues, are largely irrelevant to the ongoing scientific research. However, it is imperative, as philosophers operating within the space of reasons, that we explore the relevance that this scientific data has for philosophical views on the nature of consciousness.

            Patricia Churchland, like Paul, is famous for adopting the approach of eliminative materialism. However, she has also said on many occasions that she is not an eliminativist about consciousness. Her eliminativism about folk theory is not, as it is sometimes presented, a kind of a priori philosophical position which she adopts. She merely argues that, just as some of our folk physics terms were eliminated as we learned more in theoretical physics, so some aspects of our folk psychology MAY have to be eliminated as we learn more about neuroscience. So her claim is an empirical one, not some kind of a priori claim. When it comes to consciousness she does not expect a theory of consciousness to eliminate our ordinary conscious experiences; however, she does expect that it may in some cases radically alter our intuitions about consciousness. In her 1998 paper ‘What Should We Expect From a Theory of Consciousness?’ she correctly notes that reduction does not always mean elimination. So, for example, she notes that nobody thinks that light is not real as a result of Maxwell’s equations. She surmises that, likewise, pain will not cease to be real as a result of future discoveries in neuroscience. She admits that in the past science has shown that, for example, caloric was not real. So science can show that certain theoretical terms pick out substances and properties which are not real, and she admits that future science may show that our folk psychology of beliefs and desires is a bad theory which does not pick out real entities in the empirical world. Yet she claims that future science will not explain away pain. This is a very strange position.

            In one sense it is a position which a lot of philosophers and the man on the street would accept. The vast majority of people would agree that no matter what we learn about the neuroscience of pain, it cannot take away from the basic fact that pain feels intrinsically awful. On this interpretation of Patricia it would appear that while she thinks that beliefs and caloric are theoretical entities which are subject to refutation by future science, our experience of pain is a basic fact, not a theory, and hence it cannot be refuted by future science.

            However, the above interpretation of Patricia is difficult to maintain in light of other things she says about consciousness. She admits that our views about the nature of conscious experience have been modified as we have learned more about it through neuroscience. She cites discoveries such as Sperry’s work on commissurotomy, research into sleep and dreaming by people like Hobson, as well as the discovery of blindsight, and argues that they have cast doubt on our intuitive conception of what consciousness amounts to. However, she argues that while these experiments show the need to reconceptualise our basic theory of consciousness, they do not eliminate our actual conscious experience (Churchland: ‘What Should We Expect From a Theory of Consciousness?’).

            When she speaks of our actual conscious experiences which remain even after we have modified our theory of consciousness, she seems to be a realist about subjective experience in a similar manner to Thomas Nagel. Her view is uncongenial to the claims that Dennett makes in his ‘Consciousness Explained’. Now, obviously there is nothing wrong with siding with Nagel against Dennett, but it is surprising given her naturalist leanings. I think it is worth exploring this issue in more detail as it is a very important philosophical issue.

            Nagel argues that there are facts of the matter about subjective experience even though we may never be in a position to know precisely what these facts of the matter are. Furthermore, he thinks that there are intrinsic facts about our subjective experiences independently of our theorising about these experiences. Patricia Churchland, who holds that our consciousness will not be explained away even if our conceptions of it are, seems to have landed in a position similar to Nagel’s. She seems to hold that there is something given in our experiences independent of our theorising about them.

Chomsky and The Science Forming Faculty

   CHOMSKY AND THE SCIENCE FORMING FACULTY

Chomsky argues that our ability to construct scientific theories is made possible by an innate domain-specific faculty, which he calls our Science-Forming Faculty. It is worth pausing to consider this faculty and its nature. The Science-Forming Faculty[1] is connected with the fact that our ordinary theories of the world are underdetermined. If one were to take any theory of the world, it would be supported by a certain amount of data. Chomsky’s point is that the data does not imply just one theory; rather, there are countless theories which are compatible with the data but which are incompatible with each other. Such underdetermination has the air of a truism about it. Chomsky claims that humans are innately constrained in the types of theories they find plausible and therefore accept. He puts it as follows:

Because we share across the species a kind of science-forming faculty that limits us, if you will, but at the same time provides the possibilities of creating explanatory theories that extend far beyond the evidence that is available…it is worth paying attention to what the scientist is doing when a new theory is created…First of all the scientist has very limited evidence. The theory goes far beyond the evidence. Secondly, much of the evidence that is available is disregarded; that is in the hope that someone else will take care of it. At every stage in the history of science even normal science, there is a high degree of idealisation, selection of evidence even distortion of evidence (Magee 1978 p.187).

 

 In other words the SFF constrains the types of theories which humans can formulate.  At this point, an obvious question arises: Is Chomsky correct about us having a SFF? Does this faculty help us construct theories which in fact correspond with how things are in the world independent of us?

Chomsky’s answer to the above question is as follows:

The successful natural sciences, then, fall within the intersection of the scope of the SFF and the nature of the world. They treat the (scattered and limited) aspects of the world that we can grasp and comprehend by naturalistic inquiry, in principle. The intersection is a chance product of human nature (2000b, 82).

 

So Chomsky is here claiming that it is just a chance product of how our minds are constituted that we can form accurate theories about some aspects of the objective world. While our SFF may help us construct true theories about some aspects of the world, its very scope and limits determine that there are some areas of the world that we cannot comprehend.

Chomsky holds that the SFF is deeply connected to the Problems/Mysteries distinction. He takes the fact that humans are epistemically bounded to be a truism that any naturalistic philosopher will accept: ‘…We are after all, biological organisms, not angels…’ (ibid., 83-84). Creatures produced by evolution are bound to be contingent and fallible, having particular cognitive apparatus which works brilliantly in some domains but totally fails in other areas. Rats are a perfect example: there are some mazes which they simply cannot be taught to solve. Chomsky notes this point, claiming: ‘There is no more reason to suppose humans to be capable of solving every problem they can formulate than to expect rats to be able to solve any maze’ (1986b, 231). In another discussion of the SFF he claims, ‘The scope and limits are relative to humans; rats and Martians have different problems and mysteries’ (2000b, 83-84).

Chomsky claims that certain questions, such as the question of free will (why a person makes one choice rather than another), are mysteries, whereas other questions, such as the questions of physics and questions about the syntax of natural language, are mere problems. He furthermore claims that we do not have to consider the former questions meaningless; rather, we can consider them legitimate questions that the structure of our brains will not allow us to answer. In his Basque lectures he commented:

You can give a naturalist interpretation of such matters, and maybe there is a right question and we just cannot formulate it, because we are just not built that way. So if there is one we may not find it (2009, 47).

Chomsky’s discussion of our SFF commits him to a view of reality which is in some respects Kantian. Both thinkers believe that how we view the world is governed by our innate faculties of the mind. For both, reality conforms to our mode of cognition[2]. Furthermore, like Kant, Chomsky believes that there is a ‘thing in itself’ which extends beyond what we can know. Clarification of Chomsky’s position is obviously needed. He believes that whatever we know about the world of physics is known because of the innate structure of our minds. He further believes that there are some facts about the world that we cannot know because of the way our minds are structured. The limit on possible hypotheses which we can make is, Chomsky admits, similar to what Peirce called the principle of abduction. However, unlike Peirce, he does not think that there is anything about the theory of evolution which will guarantee that our SFF will allow us to answer all the questions we want to answer (2000b, 83).

Chomsky is committed to the view that there is a determinate world which exists independent of our cognition. He also believes that what we know can be known only insofar as it conforms to our mode of cognition, so the real world cannot be exhaustively known by humans. He captured this point in his Fara lectures when he used the metaphor of a scientist being like a person looking for his keys at night centring his search around a lamp post; the person is looking for his keys there not because he has lost them beside the lamp, but rather because this is the only place which is lit up. For Chomsky, the scientist achieves success by limiting his researches to the small areas where he can construct intelligible theories. Furthermore, he counsels that we should treat these scientific theories as being more real than the ordinary world of empirical experience.

Ultimately, Chomsky can be construed as believing that humans are born with an innate SFF, which limits and makes possible the type of scientific theories which we can accept. These theories accurately capture certain aspects of the real world, but are ill-equipped to deal with certain problems; hence part of the world will permanently remain hidden from us in darkness. Chomsky’s view of science can be captured with an image which Locke employed in his Essay Concerning Human Understanding. This picture compares the scientist to a person shut within a closet with only a little light shining in towards him. While the light gives him some understanding of the real world, it is only a fragmentary, limited grasp of the world[3]. In other words, the world is partially opaque to us because of the structure of our minds.

 THE EVIDENCE FOR THE FACULTY

We saw in the last section that Chomsky compares the faculty of language with the SFF; he does, however, note a distinction between them. He spells out this difference as follows:

In some domains-acquisition of language, object perception, etc.-the growth of knowledge just happens to us, in effect. The mental faculty grows from its initial to its steady state without choice, though not necessarily without effort of willed action. In other domains- the natural sciences, for example- the growth of knowledge involves deliberate inquiry involving hypothesis formation and confirmation; guided no doubt by ‘abductive’ constraints on potential hypotheses as well as other equally obscure factors that enter into choice of idealization and the like. The basic elements of rational inquiry may have some of the properties of such cognitive systems as the language faculty, though the ways in which they are employed is surely quite different: Scientific Knowledge does not grow in the mind of someone placed in the appropriate environment (1980a, 140).

 

This claim of Chomsky’s obviously needs clarification. He is making two key points: (1) the SFF does not just grow when a human is placed in an appropriate environment, and (2) the SFF uses a ‘generate and test’ model to construct its theories as well as certain abductive principles which Chomsky does not spell out. Since a generate and test model is easily captured by an empiricist model of the mind, he must think that it is the abductive aspect of the SFF which distinguishes it from other theories of how we develop our scientific abilities. This leads to the question of what the nature of this abductive principle is.

Chomsky has rarely explicitly discussed the exact nature of this abductive principle. However, when discussing the problems and mysteries distinction, he vaguely points towards the type of considerations which our abductive principle forces on us. Here are his reflections on the matter:

Equipped with SFF, people confront ‘problem situations’, consisting of certain cognitive states (of belief, understanding, or misunderstanding), questions are posed, and so on. Often SFF yields only a blank stare. Sometimes it provides ideas about how the questions might be answered or reformulated, or the cognitive state modified, ideas that can then be evaluated in ways that SFF offers (empirical test, consistency with other parts of science, criteria of intelligibility and elegance, etc.) (2000b, 83)

 

What is central here is the manner in which he claims the SFF evaluates questions. According to Chomsky, the SFF uses empirical tests, consistency with other parts of science, and a criterion of intelligibility and elegance. He adds an ‘etc.’ at the end of the paragraph, which implies that the SFF uses other criteria; however, we have no way of knowing what they are until he explicates them. Empirical testing is presumably what Chomsky refers to as trial and error, so this is something which can be accounted for by general intelligence. There is no reason to believe that consistency with other parts of science cannot be achieved by an inductive reasoner who has internalised a bit of logic. Our criterion of intelligibility and elegance is something which influences our thinking in aesthetics, as well as our reasoning in politics, etc. There is no reason to believe that our criterion of intelligibility and elegance is limited to a particular faculty of the mind.

The evidence that Chomsky has offered in favour of the SFF is so far extremely limited. He has pointed out that scientific theories are underdetermined by experience. This underdetermination (UD) is basically an argument from the poverty of the stimulus (APS). However, it is meaningless unless he can provide an account of how much data is enough data for a scientist to learn from. Until such time as Chomsky provides such a theory, for him to say that such and such a scientific theory cannot be learned from experience is meaningless.

Furthermore, Chomsky has never been able to provide a detailed theory of what the nature of the SFF is, and the few details he has provided do not distinguish his views from those of an empiricist. So in this sense he has not provided enough evidence to justify his postulation of a SFF. However, there is a different strand of argumentation which Chomsky sometimes uses to support his claim that people have a SFF. He points to the timing of new scientific discoveries and their reception by the public. Obviously people do not develop scientific theories at a fixed age. Chomsky clearly knows this; he is not claiming something absurd, such as that children automatically acquire Newtonian Physics by the age of 5 and Einsteinian Physics by the age of 7. However, he does claim that people hit upon new scientific theories at the same time, and that once these scientific theories are discovered people accept them as obviously true. In this context he is talking about historical time, not developmental time. Chomsky claims that Peirce first recognised these points, and in Of Minds and Language he cites approvingly Peirce’s conception of the matter:

Peirce’s sense was very straightforward and, I think, basically correct. He says you want to account for the fact that science does develop, and that people do hit upon theories which sort of seem to be true. He was also struck by the fact, and this is correct, that at a certain stage of science, a certain stage of understanding, everybody tends to come to the same theory, and if one person happens to come to it first, everybody says ‘Yes that’s right’ (2009, 3-4)

 

Here Chomsky is picking out what seems to be a real phenomenon. It does appear that for at least some scientific discoveries two great thinkers may hit upon the same idea at the same time, even though a generation beforehand nobody would have thought of the idea. Some examples from intellectual history are: Leibniz and Newton independently discovering the infinitesimal calculus, Darwin and Wallace independently discovering the theory of evolution by natural selection, and Frege and Peirce independently developing the predicate calculus. However, it is hard to see how such facts can be used as evidence for a SFF. A more natural interpretation is that people like Darwin and Wallace hit upon the same discovery because they were hard-working scientists who were able to build on the work of their predecessors. However, the question of how many new discoveries are made possible by prior theories is a question for biographers of scientists and historians of science to answer. Chomsky will need to engage with such thinkers if his speculations are to be given any substance.

Chomsky not only appeals to the fact that great scientific thinkers hit upon the same ideas at the same time to support his theory, he also claims that when such discoveries are made people recognise them as obviously true.  Here he is presumably claiming that people find such discoveries obviously true because they interpret them using their own SFF. Now, while I do not want to engage in amateur intellectual history, the historical record strongly suggests that Chomsky is mistaken on this point. Darwin’s theory of evolution was not greeted by the scientific community as being obviously true. In fact the theory sparked massive debates and controversies. Newton’s infinitesimal calculus was severely criticised by George Berkeley amongst others. In fact most of the great scientific discoveries have been greeted with severe resistance; they have not been just accepted as obviously true upon discovery.  This leads us to the question of how Chomsky’s speculations on the SFF can handle the people who did not find Darwin’s theory of evolution obviously true. Did they have a malfunctioning SFF? Did they have no SFF? Or was the faculty overruled by a stronger faculty in the mind? Chomsky does not even consider such questions. It is obvious that if he wants to construct an accurate theory of a SFF he will need to mesh this theory with actual facts from the history of science and sociological studies of people’s reactions to new scientific discoveries. As in the area of language production, when it comes to scientific discoveries and their reception, Chomsky will need to sketch an accurate description of performance in order to justify any claims he makes about competence. In the absence of such a description Chomsky’s claims remain nothing but idle speculation.

Questions about the history of science raise another interesting point. Most historians believe that science in the modern sense developed in the seventeenth century, and when Chomsky discusses the SFF he usually does so in relation to modern science. John Collins, in his paper ‘On the Very Idea of a Science Forming Faculty’, claimed that we could view Chomsky’s SFF in two different ways. The first way he calls the narrow interpretation; on this interpretation the faculty is responsible for our grasp of science in the modern sense. The broad interpretation claims that the faculty is responsible for our ability to construct rational theories to account for our experiences, so it would account for far more than just modern science.

The narrow interpretation is the one which Chomsky seems to intend us to use. All of the examples which he uses pertain to seventeenth century science. However, if this is the account that Chomsky has in mind, then it is seriously in error. Chomsky concedes that his SFF is not on a par with the language faculty in terms of evidence in favour of it. While UD is used as an argument for both faculties, there is much more evidence which can be adduced in favour of a language faculty than for a SFF. Collins notes that there are four tests which can be used to argue that a particular ability is made possible by a distinct faculty: (1) a faculty-based competence must be uniform across the species, it cannot be a culturally specific capacity; (2) the competence must follow a strict ontogenetic course, explicit teaching must not make a significant difference to the speed of the development of a final competence arrived at; (3) the competence must, to some degree, be invariant over various pathologies, injuries and differences of intelligence, so that a disturbance of a face recognition faculty should not necessarily lead to a disturbance of the language faculty; (4) the competence should reach normal maturity in the face of a poverty of stimulus (2002, 135).

Whether we interpret the SFF in the broad or narrow sense will influence whether we think that the SFF meets conditions 1-4. It is plausible to say that all cultures have science in the broad sense of constructing theories to explain, predict and control their experiences. However, science in the narrow sense is definitely culture-specific, and did not even exist five hundred years ago. So, taken in the broad sense, we could say that science is species-specific and does occur in all cultures, whereas science in the narrow sense obviously does not. On both the broad and narrow interpretations, scientific competence obviously does not seem to follow a strict ontogenetic course, and teaching does make a difference to it. On both interpretations the competence is not invariant over pathologies. Furthermore, deciding whether the competence arrives in the face of an APS is impossible until Chomsky specifies how much stimulus we need in order to learn[4] science in its broad or narrow sense; until we have such a theory, we cannot tell whether the stimulus is indeed impoverished.

So, as yet, Chomsky has not provided us with any good reason to postulate a SFF in either the broad or the narrow sense. He has given us no reason to think that such a faculty is needed to account for our scientific knowledge. His primary reason for postulating the faculty is that he thinks it can help us overcome UD.

In the next blog I will show that Quine’s linguistic conception of science is also incapable of accounting for our scientific competence. In the final blog I will show how Paul Churchland’s model in Plato’s Camera, in terms of non-linguistic maps, can deal with problems which elude the Quinean and Chomskian models.


[1] Henceforth the Science Forming Faculty will be referred to as the SFF.

[2] There are obvious disanalogies between the two thinkers. Most importantly, Kant thinks that we can only know the world insofar as it conforms to our mode of cognition and that we can never know the world as it is in itself, whereas Chomsky thinks that our mode of cognition means that we can only partially grasp certain aspects of the world, while others will forever elude our grasp.

[3] While this Lockean image is useful to illustrate Chomsky’s picture of the mind’s relation to the world, it would be a mistake to otherwise equate Chomsky’s and Locke’s theories of knowledge.

[4] Sketching a poverty of stimulus (PoS) argument for scientific knowledge seems impossible: it would need to give a plausible story of how Newton or Einstein learned or created their respective theories. It is wildly implausible that they just read them off experience, while it is just as implausible that they knew them innately. Given the difficulty we have in explaining the behaviour of animals and ordinary humans, explaining how creative geniuses construct their theories seems impossible.

Language, Science and Reference

      CHOMSKY AND QUINE ON THE NATURE OF REFERENCE

…Suppose Peter says that Joe Sixpack voted for a living wage because he’s worried about his child’s health. Are we entitled to conclude that Peter believed the world to be constituted of such entities as Joe Sixpack, living wages, and health, and relations like voting-for and worrying-about that hold among them? Would the parallel inference be legitimate when Peter says that Tom visited Boston? If Peter says that the bank moved across the street after it was destroyed by fire, does he believe that among the things in the world there are some that can be destroyed but still be around, so that they can move? (Chomsky: New Horizons in The Study of Language and Mind)

As the above quote shows, Chomsky argues that if we use a term in language, it does not follow automatically that the term commits us ontologically. So if we take terms like ‘flaw’, ‘the average man’ and ‘unicorn’, it does not follow from the fact that these terms exist in our language that they refer to mind-independent entities. He correctly stresses that we should not read off our ontology from our ordinary way of speaking. In another context he mentions how our ordinary concepts reflect intricate and surprising constraints on how we can interpret them; he also believes that these constraints are best explained in terms of innate concepts. These constraints will, to a certain extent, determine how we use our concepts to refer. Chomsky labels these constraints an I-variant of Frege’s telescope, implying that it is through the lens of these constraints that we can refer to entities in the world. So, for example, if we take the word ‘London’, this word has various different properties, some of which are contradictory:

We can regard London with or without regard to its population: from one point of view, it is the same city if its people desert it; from another, we can say that London came to have a harsher feel to it through the Thatcher years, a comment on how people act and live. Referring to London, we can be talking about a location or area, people who sometimes live there, the air above it (but not too high), buildings, institutions, etc., in various combinations  (as in London is so unhappy, ugly and polluted that it should be destroyed and rebuilt 100 miles away, still being the same city). Such terms as London  are used to talk about the actual world, but neither are or are believed to be things-in-the world with the properties of  the intricate modes of reference that a city name encapsulates. (NHLM, p. 37)

So terms such as ‘London’ are used to talk about the actual world but, according to Chomsky, people do not think that there are things in the world with the properties of the intricate modes of reference that a city name encapsulates.  Chomsky’s discussion of the vagaries of reference here will remind people of Quine’s similar discussion in chapter 4 of Word and Object:

Insofar as it is left unsettled how far down the spectrum toward yellow or up toward blue a thing can be and still count as green, ‘green’ is vague. Insofar as it is left unsettled where to withhold ‘muddy water’ in favour of ‘wet mud’, ‘water’ and ‘mud’ are vague. In so far as it is left unsettled how far from the summit of Mount Rainier one can be and still count as on Mount Rainier, ‘Mount Rainier’ is vague. (Quine: Word and Object, p. 126)

In the above section Quine is discussing the vagueness of terms and how their extensions are unclear. He is also reminding us that while some sentences may appear to be purely referential, one of the terms in the sentence may not actually refer at all and may in fact be a purely relational term. So clearly Quine and Chomsky are making similar points about language: ordinary language is shot through with vagueness and ambiguity, and some terms which appear to be referential turn out, when analysed, not to be referential at all.

Quine emphasises that the vagueness of terms is a natural consequence of word learning. So, for example, he claims that the indefinite objects of a vague term are the objects which bear only a slight similarity to the objects for which we have been rewarded for verbally responding in the past. So if we think of a child learning a word by induction through observing society’s usage of the term, the vague cases will be the cases for which induction is most inconclusive because of a lack of evidence. Quine argues that the evidence will not be there to be gathered, because society’s members have themselves had to accept similarly fuzzy edges when they were learning.

Vague words can, he claims, be either singular or general; the passage quoted above gives examples of both (the general terms ‘green’, ‘water’ and ‘mud’, and the singular term ‘Mount Rainier’).

According to Quine, a general term true of objects can be vague in two different ways: firstly, because of the vagueness of the boundaries of each of its objects, and secondly, because of indecision as to whether to include or exclude marginal objects.

So, for example, if we take a term like ‘mountain’, it is vague how much terrain counts as part of a given mountain (where exactly the cut-off point is), and it is also vague which lesser entities we can count as mountains at all.

Ambiguous terms differ from vague terms. A vague term is a term which we are unsure how to apply to marginal objects, whereas an ambiguous term may be variously true of certain objects and false of them as well. Quine gives the example of the term ‘light’, which, he argues, will be true of dark feathers in one sense but false of them in another. In the first sense we take ‘light’ as in ‘light as a feather’; in the second we take ‘light’ as in ‘a beam of light’. Quine claims that a partial explanation of ambiguity can be achieved by considering the linguistic notion of homonyms. His gloss on homonyms is as follows:

Lexicographers and grammarians have long permitted themselves to treat words otherwise than as linguistic forms, by declaring of a form that it functions sometimes as one word and sometimes as another. (ibid, p. 129)

This of course leaves us with a further question: when do we have two homonyms as opposed to one ambiguous word? According to Quine, lexicographers suit their convenience when trying to answer such questions. So if we take the example of ‘bore’, in the sense of ‘she bore her child’ and in the different sense of ‘she is a bore’, the lexicographer will treat ‘bore’ in these sentences as two homonyms because of their different etymologies and different grammatical functions.

Quine claims that we deliberately create ambiguity when we name somebody. He uses the example of naming somebody ‘Paul’: this name is not a general term true of many people; it is rather a singular term with wide ambiguity. The evidence for this claim is based on a grammatical fact about general terms. A general term like ‘dog’ permits the following grammatical constructions, and it is the possibility of such constructions that marks out a term as a general term: a dog, the dog, dogs, etc. Obviously in the case of ‘Paul’ we are not permitted to use such constructions as ‘a Paul’, ‘the Paul’, ‘Pauls’, etc. The example of ‘Paul’ thus distinguishes between general terms and ambiguous singular terms in terms of the grammar of English.

Now, though, Quine asks us to consider a general term, and he asks how we can decide how much of the term’s multiple applicability is due to ambiguity and how much is due to generality. To decide between the two alternatives Quine gives some concrete examples of terms with multiple applications, such as the following:

[1] That is a hard question.

[2] That is a hard table.

[3] The proposition ‘either it is going to rain or it is not going to rain’ is true.

[4] The proposition ‘it will rain tomorrow’ is true.

[5] Material objects exist.

[6] Numbers exist.

Some philosophers claim that the terms ‘hard’, ‘true’ and ‘exist’ in the above sentences are different uses of the same ambiguous terms. Quine disagrees and claims rather that they are merely very general terms:

Why not view ‘true’ as unambiguous but very general, and recognize the difference between true logical laws and true confessions as a difference merely between logical laws and confessions?

Ultimately Quine suggests that we treat terms as ambiguous when they can be clearly true and clearly false of one and the same thing from utterance to utterance. However, he reminds us that while this may be our best way of discovering which terms are ambiguous, it certainly does not follow that all terms which vary in truth value can be explained in terms of ambiguity. He gives the example of the statement ‘The door is open’, which changes its truth value depending on the movement of the door but which is not an ambiguous statement.

A further point which Quine makes is that ambiguity can have implications for composite words, in the sense that such words can be interpreted either in an attributive sense or in a syncategorematic sense. Quine explains this distinction in the following manner:

One way it enters is through indeterminacy between the truly attributive and the syncategorematic use of certain adjectives. Thus consider the rich little word ‘poor’. When it is ostensibly in attributive position it may either have truly attributive use, in which case it may either impute poverty or express pity, or it may be syncategorematic, suggesting ‘badly’. If in ‘poor violinist’ we take the use of ‘poor’ as truly attributive, then poor violinists are poor (or perhaps pitiable) and they are violinists; if we take it in the syncategorematic way, then poor violinists need to be neither poor nor pitiable nor even, by decent standards, violinists. (ibid p.132)

Quine further clarifies his views on this topic by explaining that, when it comes to the ambiguity of syncategorematic compounds, the ambiguous term is the compound, not the adjective, because an adjective in a syncategorematic use is not being used as a term. Quine offers a good example of this type of sentence: ‘I passed nobody on the street’, where ‘nobody’ is clearly not being used as a term.

Like Quine, Chomsky also discusses homonymy:

Consider the word ‘Bank’  (savings, river). We can say that:

(1) The bank burned down and then it moved across the street;

(2) The bank, which had raised the interest rate, was destroyed by fire; and

(3) The bank lowered the interest rate to keep from being blown up.

Referential dependence is preserved across the abstract concrete divide. Thus  (1) means that the building burned down and then the institution moved; similarly (2) and (3). But we cannot say that:

(4) The bank burned down and then it eroded; or

(5) The bank, which had raised the interest rate, was eroding fast; or

(6) The bank raised the interest rate without eroding.

Sentence (4) does not mean that the savings bank burned down and then the river bank eroded.

The facts are often clear, but not trivial. Thus, referentially dependent elements, even the most narrowly constrained, observe some distinctions but ignore others, pronouns, relatives, the ‘empty category’ that is the subject of ‘being blown up’ and eroding’. In the case of ‘bank’, the natural conclusion is that there are two Lis that happen to share the same I-sound (homonymy),  and that one of them,’savings bank’ is polysemous, like ‘book’: it provides a way of looking at the world that combines abstract and concrete properties, allowing referential dependence across these perspectives. (Chomsky, NHLM p. 180)

Chomsky is not here concerned with how to distinguish between an ambiguous general term and homonyms; his concern is with showing the complexity of our referential apparatus and how subject it is to internal constraints.

So he asks us to think of the linguistic form ‘bank’. This written inscription (or spoken sound) is used to label two different concepts: (1) bank as in river bank, and (2) bank as in financial institution. The key point to note in the above six examples is the referential dependence of the pronoun ‘it’ on ‘bank’, and how this dependence changes depending on how we interpret the term ‘bank’. Thus consider (1) in relation to (4):

(1) The bank burned down and then it moved across the street.

(4) The bank burned down and then it eroded.

A number of points should be obvious from the above sentences. Firstly, in (1) ‘bank’ and ‘it’ are referentially dependent on each other, even though ‘bank’ refers to a concrete object (a particular building) whereas ‘it’ refers to an abstract entity (a financial institution). Now let us consider sentence (4): we cannot treat ‘bank’ as the financial institution in the first part of the sentence and treat ‘it’ as referring to the river bank in the second part. It is a fact about referential dependence that such a move is blocked.

While both thinkers agree about the ambiguity of language, they use this point for different purposes. Quine, who thinks that our scientific theories are a web of sentences which face the tribunal of experience as a corporate whole, tries to tidy up our language by translating it into the syntax of quantification. Chomsky, on the other hand, thinks that a different faculty of the mind (our Science-Forming Faculty) is responsible for our developing scientific theories, so he argues that we do not need to modify our language. For Chomsky our ordinary language is vague, but it is fine for the purposes of day-to-day communication. Chomsky discusses the problems of conceiving of language as referential only to illustrate what he conceives of as a poverty of stimulus for concept acquisition.
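To give a concrete flavour of what regimentation into the syntax of quantification looks like, here are some standard, textbook-style glosses (my own wording, not quotations from Quine) of sentences whose surface grammar seems to name dubious entities:

(a) ‘There are no unicorns’ becomes ¬∃x (x is a unicorn), which carries no commitment to unicorns as entities.

(b) ‘There is a flaw in the argument’ can be paraphrased as ‘The argument is flawed’, so that ‘flaw’ stops functioning as the apparent name of a thing and becomes part of a predicate true of the argument.

(c) ‘The average man has 2.4 children’ can be paraphrased as a claim about the number of children divided by the number of men, quantifying over men and children but not over an entity called ‘the average man’.

For Quine it is the regimented theory, with its explicit quantifiers and bound variables, that wears its ontological commitments on its sleeve; ordinary language, as both he and Chomsky agree, is too vague and ambiguous to do that job.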

Chomsky’s primary point is that the words of our language are richly constrained by rules of the lexicon and that the meaning of a term is not to be discovered by pairing it with an external object in the world. He is making the point that no external object could have the properties which we attribute to our concepts, such as being concrete and abstract at the same time. He cites approvingly Strawson’s claim that there is no such thing as a logically proper name. And he alludes to the fact that any act of naming will require a lot of stage setting (he cites Goodman explicitly), and he claims that this stage setting will be provided by our innate constitution.

Chomsky asks us what goes into referring to an object; he claims that such things as design, intended and characteristic use, and institutional role will be decisive factors. He claims that if something looks like a book, but we are told that it was designed to be a paperweight, we may come to think of it as a paperweight. We can think of a book as an abstract entity, as in ‘War and Peace’, or as a concrete entity, as in a particular copy of ‘War and Peace’.

Chomsky believes that the preceding discussion shows that the act of referring requires an intricate mental architecture. He makes the following point about reference:

The question, ‘to what does word X refer?’ has no clear sense, whether posed for Peter, or (more mysteriously) for some ‘common language’. In general, a word, even of the simplest kind, does not pick out an entity of the world, or of our ‘belief space’- which is not to deny, of course, that there are banks… The observations extend to the simplest referential and referentially dependent elements [pro-nouns, same, re build, etc]; or to proper names, which have rich semantic-conceptual properties derived in large part from our nature, with some overlay of experience. (NHLM p 181)

In the above section Chomsky is claiming something which seems somewhat bizarre: he is claiming that a question such as ‘To what does word X refer?’ has no clear sense, yet at the same time he admits that there are of course things like banks in the world. So a sceptic could ask: given that banks exist, why can’t we just claim that ‘bank’ refers to banks? The obvious reply is that if we take the word ‘bank’ on its own and say that it refers to banks, does this mean river banks, lending banks, the bank as an abstract financial institution, or the bank as a particular building? This can only be decided by seeing how the word is used in sentences (its syntactic constraints) and by seeing the contexts in which the word is used (its pragmatic use). So the reference of the word is determined by how the person uses the word and the context in which they use it. In other words, people refer, not words.

Chomsky follows up his remarks on how we refer to banks by claiming that the act of naming is a kind of ‘world-making’, in Goodman’s sense of the word. However, Chomsky believes that this world-making is made possible by our innate human nature, not by experience. Chomsky notes at this point that his approach to semantic interpretation has a traditional flavour to it, and he likens it to the approach which was taken by seventeenth-century rationalist philosophers. According to Chomsky, the rationalist philosophers claimed that we receive the data of sense and interpret it according to our innate cognoscitive powers. So for both Chomsky and the rationalists our interpretations of the data are largely achieved as a result of our innate prowess.

Chomsky then goes on to quote Locke, Hume, Hobbes and Cudworth as exemplars of the view of semantic interpretation which he favours. This is itself a curious move, considering that, with the exception of Cudworth, none of the above thinkers is in fact a rationalist. However, given that we know from other writings that Chomsky knows that Hume and Locke were empiricists, not rationalists, we will not interpret him as making such a basic mistake. Chomsky’s quotes from Hobbes, Locke and Hume can be read as exemplifying the complexity of the ordinary concepts we use, such as those of tree, river and person. Furthermore, Chomsky seems to be implying that concepts with such rich intricate structure cannot be learned from experience, despite what the empiricist philosophers seem to think.

It is perhaps helpful at this point to illustrate some of the points which, according to Chomsky, Hume, Locke and Hobbes made about such concepts:

A man will always be the same, whose actions and thoughts proceed all from the same beginning of motion, namely, that which was in his generation; and that will be the same river which flows from one and the same fountain, whether the same water, or other water, or something else than water, flow from thence (as in the classical case of the ship of Theseus); and that the same city, whose acts proceed continually from the same institution. ( NHLM, p. 183 quoting Hobbes)

The point of this quote, which Chomsky attributes to Hobbes, is that the identity of a thing, whether a ship, a river or a city, can be traced to the point of its origin. So, for example, we cannot say that the River Liffey and the River Boyne are the same river, because they originated from different sources. Chomsky then goes on to discuss how philosophers from Locke to Hume developed the concept of personal identity:

The inquiry into personal identity from Locke to Hume was concerned with organic unity, a broader notion. A tree or an animal ‘differs from a mass of matter,’ Locke noted by virtue of the ‘organization of parts, in one coherent body, partaking of one common life’ with ‘continued organization’ that comes from within, unlike artifacts. The identity of an oak resides in ‘a sympathy of parts’ contributing to ‘one common end’ of ‘support, nourishment and propagation’ of the form, Shaftesbury added. Hume largely agreed, though taking ‘the identity, which we ascribe to the mind of men,’ and ‘the like kind… that we ascribe to vegetables and animal bodies,’ to be ‘only a fictitious one’ established by the imagination, not Shaftesbury’s ‘peculiar nature belonging to this form’ (NHLM, p.183)

So Chomsky moves from speaking sympathetically of Hobbes’s view that identity has to do with the source of the thing considered, to considering later views on the topic. He notes Locke’s view that the identity of a thing is concerned with its organic unity, and notes that Shaftesbury added the further criterion that for something to count as an entity it must have a common end, e.g. propagation of the form. It is Hume’s view, that the identities we project onto things are just psychological constructs, which particularly interests Chomsky.

Chomsky agrees with Hume on this point and thinks that when we look closely at the intricacy of concepts we will come to see that Hume is correct. He cites the work of Locke to illustrate exactly how intricate and detailed concepts are. ‘Person’, for Locke, was a forensic term which had the following properties:

Person is a forensic term appropriating actions and their merit; and so belongs only to intelligent agents, capable of a law, and happiness and misery as well as accountability for actions and much else. (NHLM, p.183 Chomsky quoting Locke)

Chomsky then goes on to quote Locke’s views on the identity of a river, which go beyond Hobbes’s views on the matter. Chomsky claims that such views on identity (which he considers a psychological matter, not a metaphysical one) barely skim the surface of what we know of concepts, and that our understanding of these concepts can only be marginally affected by experience. Once we realise the complex nature of these innate concepts, our task in semantics becomes studying their nature and how they interact with other faculties of the mind to yield our rich interpretation of the world. Our task is no longer to study the relation between concepts and the world.

One of the first points that comes to mind upon reading Chomsky’s discussion of concepts is his reliance on complex philosophical theories of identity as a way of revealing the incredible complexity of our concepts of person, tree, river, etc. Chomsky views these complex concepts as psychological ways of organising our experiences, and he claims that such philosophical expositions of concepts hardly skim the surface when compared with the knowledge that people implicitly have. According to Chomsky most of our conceptual knowledge is unconscious; this can be seen from the fact that ordinary language users can reliably distinguish between rivers, trees and persons, and speak about such entities in a manner that has coherence and appropriateness to the situation, yet still be unable to give philosophical explications of such concepts. So, for Chomsky, philosophers like Locke, Hobbes and Hume are doing their best to rationally reconstruct what it is that they implicitly know. Chomsky can thus be fruitfully seen as a sort of inverse Hume. Hume believed that our ideas were faint copies of our impressions, whereas Chomsky believes that our ideas (concepts, lexical items) are innate rules which we unconsciously use to interpret the flux of experience.

Chomsky has often written about his picture of semantics as though it were diametrically opposed to Quine’s view of semantics. Here he criticises what he takes to be a modern consensus in the philosophy of language:

In general, there appears to be no reference relation in human language and thought in the technical sense of Frege, Peirce, Tarski, Quine, and contemporary ‘externalist’ philosophy of language and mind. Referring is an action, and the internal symbols that are used to refer do not pick out mind-independent objects. On investigation, it turns out that what we understand to be a house, a river, a person, a tree, water, and so on, is not a physical construct of some kind. Rather, these are creations of what seventeenth century investigators called our ‘cognoscitive powers’, which provide us with a rich means to interpret and refer to the outside world from certain perspectives. (Chomsky, Evolution of Human Language, p. 12)

We saw above that Quine actually agrees with Chomsky about the ambiguities of language and the vagueness of its reference. So Quine does not hold the unrealistic conception of linguistic reference that Chomsky attributes to him. The real difference between the two thinkers is that Quine does not use facts about the ambiguous, vague nature of language to argue for innate concepts.

Quine’s magnum opus Word and Object is primarily concerned with two things: (1) how a child develops his overall theory of the world through learning his language, and (2) how we can regiment this language to make its ontological commitments clear. When Quine speaks of language as a social art which we learn from our peers in intersubjectively conspicuous circumstances, he seems to be endorsing precisely the view of language which Chomsky is criticising. But, as always in philosophy, the situation is much more complicated than this.

When Chomsky speaks of the concepts of ‘tables’, ‘trees’ and ‘people’ as psychological concepts which we use to organise our experiences, he is echoing claims which Quine made some fifty years earlier in From a Logical Point of View. In the third paper of From a Logical Point of View, ‘Identity, Ostension, and Hypostasis’, Quine spoke about the process of imputing identity to an object. He emphasised how ostensive definition would be massively underdetermined by the data:

Pointing is of itself ambiguous as to the temporal spread of the indicated object. Even given that the indicated object is to be a process with considerable temporal spread, and hence a summation of momentary objects, still pointing does not tell us which summation of momentary objects is intended, beyond the fact that the momentary object at hand is to be in the desired summation. Pointing to a, if construed as referring to a time-extended process and not merely to the momentary object a, could be interpreted either as referring to the river Cayster of which a and b are stages, or as referring to the water of which a and c are stages, or as referring to any one of an unlimited number of further less natural summations to which a also belongs. (LPV, p.67)

In other words, ostensive definition requires a lot of stage setting. Quine points out that such difficulties are typically overcome by saying ‘this river’, in other words by appealing to a prior concept of a river as one distinctive time-consuming process, one distinctive form of summation of momentary objects. At this point, Quine notes, we have moved beyond ostension to the stage of conceptualisation. However, he then asks us to assume that we do not already have the prior concept of ‘river’, and asks again how far we could get with ostension alone.

Quine sketches the following procedure:

What we may do then is point to a and two days later to b and say each time, ‘This is the Cayster’. The word ‘this’ so used must have referred not to a nor to b, but beyond to something more inclusive, identical in the two cases. Our specification of the Cayster is not yet unique, however, for we might still mean any of a vast variety of other collections of momentary objects, related in other modes than that of river kinship; all we know is that a and b are among its constituents. By pointing to more and more stages additional to a and b, however, we eliminate more and more alternatives, until our listener, aided by his own tendency to favor the most natural groupings, has grasped the idea of Cayster.

Quine’s emphasis on ‘natural groupings’ shows that he allows some innate apparatus to play a role in explaining concept learning; however, his primary emphasis is on reinforcement and trial and error. Chomsky, on the other hand, when discussing the complex ordinary-language concept of a house, simply uses the complexity of the concept to argue that the concept must be innate.

Chomsky’s discussion aims to clarify what the meaning of the word is, and to emphasise how much information is included in such a supposedly simple concept; he thinks that this complexity shows that the concept cannot have been learned in a purely referentialist manner. One interesting property of the concept ‘house’ is that if we say ‘John is painting the house’, we mean that he is painting the exterior of the house, not the interior. A connected feature is the interpretation of ‘brown house’: Chomsky notes that it is a universal feature of language that ‘brown house’ will be interpreted as meaning that the exterior of the house is brown, not the interior. One point to note about this, claims Chomsky, is that the exterior surface is very important for understanding such a term. Thus if I see a house I see its exterior surface; seeing its interior surface does not suffice. But, he notes, a house is not just its exterior surface considered as a geometric object. If John and Mary are equidistant from the exterior of a house, but John is inside the house and Mary is outside it, we will say that Mary is near the house but not that John is. He also notes that the interior of the house is abstractly conceived: the house will be the same house even if I fill it entirely with concrete.

These strange and complex properties of the concept of a house, combined with the speed with which children supposedly acquire such concepts, are used as evidence that the concepts are innate. I will not go into who is right and who is wrong on the question of concept acquisition (spoiler alert: neither of them is); here I merely wanted to show that both thinkers agree that our ordinary language is too vague and sloppy to be accorded any ontological significance. This is a problem for Quine, who thinks that our overall scientific theories of the world are an interconnected set of sentences; hence Quine has to try to modify language by translating it into the syntax of quantification. Chomsky, who thinks our science forming faculty has little to do with our language faculty, faces no such pressure to regiment ordinary language.

In my next blog I will discuss why Chomsky’s postulation of a science forming faculty has little evidence to support it. In the following one I will show that Quine’s attempt to translate scientific theories into the syntax of quantification has failed. Given that both of their explanations of scientific competence fail, and that their views on language show that language is too vague to house the precise theories of science, neither has a satisfactory theory of scientific competence. So in my final blog I will discuss how Paul Churchland’s theory of Hebbian Maps (as interpreted by Matt Bush) solves some of the difficulties facing Chomsky and Quine.

Intellectual Disability and Radical Translation: Latest Draft

Radical Interpretation and Intellectual Disability.

              Part 1:  The Principle of Charity and the Typical Mind Fallacy.

Different persons growing up in the same language are like different bushes trimmed and trained to take the shape of identical elephants. The anatomical details of the twigs and branches will fulfil the elephantine form differently from bush to bush, but the overall outward results are alike. (Word and Object, p. 8)

In institutions all around the world live people with severe intellectual disabilities. Such disabilities often go hand in hand with communication difficulties. Some people have an inability to understand language through damage to Wernicke’s area, while others can be born with an inability to produce speech through damage to Broca’s area. Obviously, having an inability to speak or understand language does not make communication impossible; animals that have no linguistic abilities are nevertheless very skilled at communicating with each other (see Hauser 1996 and Tomasello 2008). People with profound intellectual disabilities sometimes have serious difficulties communicating their needs and interests. Carers, whose role it is to help people with intellectual disabilities to fulfil their life goals and achieve a full life integrated in society at large, often have difficulties interpreting the attempted communication of the people they are trying to help. Even for two people without profound intellectual disabilities or communication impairments, interpreting each other can be difficult. Our linguistic communication is shot through with metaphor and analogy (Lakoff and Johnson 1987, Hofstadter and Sander 2013). When two people converse with each other they share a background theory which they use to interpret what the other is saying to them, and to guide their own prospective speech-acts. Donald Davidson sums up this process as follows:

Here is a highly simplified and idealised proposal about what goes on. An interpreter has, at any moment of a speech transaction, what I persist in calling a theory. (I call it a theory, as remarked above, only because a description of the interpreter’s competence requires a recursive account.) I assume that the interpreter’s theory has been adjusted to the evidence so far available to him: knowledge of character, dress, role, sex, of the speaker, and whatever else has been gained by observing the speaker’s behaviour, linguistic or otherwise. As the speaker speaks his piece the interpreter alters his theory, entering hypotheses about new names, altering the interpretation of familiar predicates, and revising past interpretations of particular utterances in light of new evidence… To put this differently: the theory we actually use to interpret an utterance is geared to the occasion…The speaker wants to be understood, so he intends to speak in such a way that he will be interpreted in a certain way. In order to judge how he will be interpreted, he forms, or uses a picture of the interpreter’s readiness to interpret along certain lines. (A Nice Derangement of Epitaphs pp. 99)

Davidson is picking out a key component of what goes on in interpretation between two people. Both people come to the occasion of interpretation with their own prior theory, a background expectation about the world and the agents it contains. When conversing they try to guess what prior theory the person they are speaking to is using, and they frame their speech acts in ways they think are congenial to the other person’s prior theory. Ultimately, Davidson argues, understanding takes place when both thinkers converge on a passing theory. Davidson claims that, accident aside, a passing theory is where agreement is greatest (ibid p. 102). As people talk, their prior theories become more and more alike, and as a result so do their passing theories. Where agreement and understanding coincide, so do the speakers’ passing theories. An important point to note is that Davidson’s theory of interpretation involves maximising agreement with the person we are conversing with. For Davidson, without a massive shared background of truths, interpretation and hence communication will be impossible. He stresses the importance of maximising agreement in the quote below:

The methodological advice to interpret in a way that optimizes agreement should not be conceived as resting on a charitable assumption about human intelligence that might turn out to be false. If we cannot find a way to interpret the utterances and other behaviour of a creature as revealing a set of beliefs largely consistent and true by our standards, we have no reason to count that creature as having beliefs, or saying anything. (Radical Interpretation p. 137)

So, for Davidson, a shared background set of beliefs is a necessary condition of communication taking place. He also argues that it is important to keep in mind not only the embodied presence of both people communicating with each other, but also the shared objects of experience of both people. This triangulation of self, other, and shared object of experience is a necessary condition of communication taking place. Communication does not take place in a vacuum, and a theory of communication that ignores the world that both thinkers are communicating about will go badly wrong.

When Davidson speaks about triangulating on a shared object of experience, he is arguing that such triangulation is a necessary condition of communication. There is good empirical evidence that humans do in fact communicate with each other by triangulating on shared objects of experience (Cavell 2008). Such triangulation is typically facilitated by the way young children automatically track each other’s eye movements and understand the meaning of pointing. However, this ability is not universal: children with autism are typically poor at tracking eye movements and interpreting the pointing gestures of others (Baron-Cohen 1995). Autistic children who do not also have an intellectual disability may experience language delays but, with training, usually develop language. However, for children who have both an intellectual disability and autism, the likelihood of developing language is greatly decreased. A substantial proportion of people who have an intellectual disability do so because of some birth trauma, or because of toxins the child was inadvertently exposed to in utero, so the disability affecting the child can have multiple effects throughout the brain. For example, as a result of an accident of birth a child could be born with an inability to understand speech, to track eye movements, and to interpret pointing gestures. Obviously communicating with such a person will be extremely difficult, and facilitating communication will require a lot of careful thought on the part of caregivers.

It is important, at this point, to discuss some developmental phases children typically go through. A few hours after birth children are drawn to look longer at drawings of human faces than at other patterns (Fantz 1963). From 3 months children monitor the eye gaze of others (Cohen 1995). From 4 months humans have some grasp of object behaviour (Baillargeon et al., 1985). At 6 months children begin reaching for objects. At 9 months children begin to recognise intentionality, and proto-declarative referencing begins (Cohen 1995). At 10 months children begin to crawl. Between 9 and 12 months children begin to engage in triadic behaviours. At 12 months children begin pointing, which brings them deeper into the world of triadic relations. From 12 to 16 months children learn about 0.3 words per day (Bloom, 2002). From 14 to 16 months children begin to walk. From 18 to 23 months children learn about 0.8 words per day. Between 18 and 24 months children recognise themselves in the mirror (Rochat and Straino, 1999) and display a deep understanding of others’ intentional aims. From 23 to 30 months children learn about 1.6 words per day. At 30 months children begin to master the distinction between mass and count nouns (Gordon, 1985). At 48 months children begin to pass the false belief test. From 4 to 6 years of age children begin explicit mentalising, justifying false beliefs by pointing out misleading reasons, and begin to understand higher-order mental states (Fritz and Cohen 1995). Up to the age of 6 children prefer to organise things according to thematic relations. From 9 onwards children enter the peak period of language acquisition, learning up to 12 words a day.[1]

All of the above developmental facts are important as children begin to learn their language and try to communicate with each other. Some of these developmental regularities are learned, while others are the result of innate mechanisms. We have seen above that autistic children who do not go through the pointing phase have extreme communication difficulties. Considered in Davidsonian terms, children’s shared developmental history forms part of the prior theory which they take for granted when they communicate with each other. Children with profound intellectual disabilities sometimes never pass through these developmental phases. As a result, their prior theory is so different from that of their peers that converging on a passing theory is virtually impossible. When this happens people with profound intellectual disabilities sometimes retreat into their own subjective worlds and, despite their carers’ best intentions, never learn to communicate their needs.

As we saw above when discussing radical interpretation, Davidson recommends applying what he calls the principle of charity. To do this we need to try to make maximum sense of the utterances of the people we are interpreting, and to minimise the incoherencies we attribute to them. So, for example, if, when interpreting the utterances of an alien tribe, we find ourselves ascribing to them a belief in true contradictions, we should consider the possibility that we are simply misinterpreting them. It is, after all, a simpler assumption that we have misinterpreted them than that they hold radically anti-logical views. The principle of charity is a valuable heuristic device when engaging in radical interpretation. Assuming that the person is a lot like us, and holds a lot of the same true beliefs about the world, is very important if we want to engage with the person about shared objects in the environment. Shared intentionality (similar belief/desire psychology) and shared background knowledge about the world of ordinary enduring middle-sized perceptual objects help triangulation on a shared object of experience take place, and hence facilitate communication. However, as we saw above, for people with profound intellectual disabilities the idea of a principle of charity, and of prior theories converging as people talk with each other, is of limited help.

When we apply Davidson’s approach we run the risk of interpreting people so closely by the lights of our own conceptual scheme that we bracket their otherness. So, for example, Davidson, using the principle of charity, argues that we should not interpret (or translate) people as believing in true contradictions. However, even within our own linguistic community there are people who believe in true contradictions. Graham Priest’s dialetheism is a good example of a theoretical position which accepts the existence of true contradictions, and Matte Blanco, in his The Unconscious as Infinite Sets, argues that the unconscious is governed by a logic that sometimes accepts true contradictions. Blanco and Priest pose a serious difficulty for Davidson’s principle of charity: an interpreter trying to understand an alien tribe using the principle of charity would end up falsifying the data if they were interpreting an alien version of Priest or Blanco.

A real difficulty with interpreting another is that it is necessary to assume that the person one is interpreting is in most ways similar to oneself; without making this assumption it is virtually impossible to interpret what the other is saying. However, as we saw above, this strategy risks falsifying the data. William James was sensitive to the problem of incorrectly interpreting others because we assume that their minds have the same structure as ours; he called this the Typical Mind Fallacy. He noted how philosophers like Berkeley and Locke misinterpreted each other because they wrongly assumed that all minds were structured in the same way as their own. The Typical Mind Fallacy led to disputes between Locke and Berkeley on the nature of abstract ideas, disputes which could have been resolved if they had not wrongly assumed that all people have the same capacity for mental imagery (see Berman (2008) and Berman and King (2014)).

So when trying to correctly interpret the communication of another, while it is necessary to assume that they are mostly like us, we must be aware that this tool can lead us astray; we must recognise difference when it manifests itself. For example, children with Williams Syndrome typically have a different intuitive folk biology from ordinary children[2]. So if someone were to apply a strict principle of charity to interpreting those with Williams Syndrome, this would impede communication rather than help it. It is therefore important to note that at the beginning of interpreting another person there is always a risk of missing essential detail. Even amongst people who do not have a profound intellectual disability, people sometimes misinterpret each other because they wrongly believe that the other has a mind just like theirs. So, for example, for years colour blindness was not noticed, variation in people’s ability to form mental images was ignored, and the existence of synaesthesia (e.g. people who taste colours) was denied. These differences were overlooked despite the fact that people could communicate their ideas to each other through language. Given these facts it seems likely that we are not aware of the mode of experience of many of the people with profound intellectual disabilities whom we care for, most of whom do not even have the ability to communicate their experiences to us through language.

In 1968 the Russian neuropsychologist Alexander Luria wrote his famous The Mind of a Mnemonist about S. V. Shereshevskii, who had an almost unlimited memory as a result of his synaesthesia. Luria’s book was important because he not only discussed the neurological and psychological abilities of Shereshevskii but also told the story of his subjective experience of the world. The neurologist Oliver Sacks, who was heavily influenced by Luria, has followed in this tradition and over the last 40 years has written a series of case studies describing the subjective experience of people suffering from a variety of neurological disorders. Like Sacks and Luria, Vilayanur S. Ramachandran has written some beautiful case studies describing the subjective experience of people with various neurological disorders. What these case studies reveal is a variety of subjective experiences which are amazing and tragic at the same time. So, for example, there are prosopagnosics, people who become face-blind and lose the ability to recognise even their own wives’ faces. On the other side of the coin there are people with Capgras delusion who can recognise faces but, because of damage to the connection between the amygdala and the inferotemporal cortex, are no longer capable of attaching the usual emotional significance to them, and so develop the delusion that their family and friends are impostors. There are people who acquire forms of amusia which make them find the sound of music intolerable. Sometimes these disorders, and ones like them, are acquired (through brain damage or a degenerative neurological disorder), while other times they are congenital. In people with profound intellectual disability who have difficulty communicating, disorders like these can go undiagnosed, and behaviour which seems odd may in fact be the result of some such acquired disorder. However, arranging an MRI or a CT scan can be difficult: the patient may not be able to sit still while being scanned, and there may be difficulties sedating them because of other medications they are on. The waiting list for an MRI or CT scan can also stretch to months or even a year. In the meantime, measures need to be taken to help the person with an intellectual disability lead a rich and fulfilling life.

So far we have seen that Davidson’s principle of charity is a useful heuristic tool for interpreting the behaviour of others, but that it runs the risk of ignoring radical otherness. In his “Unified Theory of Thought and Action” Davidson employs two formal tools in interpreting the verbal behaviour of the other: Bayesian decision theory and Tarski’s theory of truth. His formal treatment of interpretation is very useful and important, and can be applied to data gathered by carers on the ground.

The carer, though, will go astray if they stick too strictly to Davidson’s principle of charity. It is important to realise that people whose brains are structured radically differently from those of others may not enjoy or want the same things as most other people in society. So, for example, a person with an intellectual disability who has a certain type of amusia will obviously not enjoy being brought to concerts. We can typically judge whether a person who has no language likes music by attending to their behaviour in the relevant circumstances. Observing the environmental conditions which cause unhappy or happy behaviour is a minimal requirement for a carer. However, what is important is not just recording which environmental conditions lead to which behaviours, but using these facts to construct an accurate cognitive map of the person whose behaviour we are interpreting. This cognitive map can be used to help predict the behaviour of the person being cared for, and to help them experience the world as they choose to.
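
To make the idea of a cognitive map a little more concrete, here is a minimal sketch in Python of the kind of bookkeeping I have in mind: tallying which environmental conditions tend to be met with positive reactions and turning those tallies into rough preference estimates. The condition labels, reaction labels, and scoring are invented for illustration; this is not a clinical instrument, just a picture of the record-keeping described above.

```python
from collections import defaultdict

# Each observation pairs an environmental condition with the observed reaction.
# The labels are illustrative placeholders, not a clinical coding scheme.
observations = [
    ("quiet room, music playing", "distressed"),
    ("quiet room, music playing", "distressed"),
    ("garden, afternoon walk", "smiling"),
    ("garden, afternoon walk", "smiling"),
    ("day centre, group activity", "withdrawn"),
]

def build_cognitive_map(obs):
    """Tally how often each condition was met with a positive reaction."""
    counts = defaultdict(lambda: {"positive": 0, "total": 0})
    positive_reactions = {"smiling", "relaxed", "engaged"}
    for condition, reaction in obs:
        counts[condition]["total"] += 1
        if reaction in positive_reactions:
            counts[condition]["positive"] += 1
    # Convert tallies into a rough preference estimate per condition.
    return {c: v["positive"] / v["total"] for c, v in counts.items()}

cognitive_map = build_cognitive_map(observations)
for condition, score in sorted(cognitive_map.items(), key=lambda x: -x[1]):
    print(f"{condition}: estimated preference {score:.2f}")
```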

Everything I have said here may seem blindingly obvious. However, the important point I want to make is the need to construct accurate models of the people whose behaviour we are interpreting, and to use these models to construct humanistic case studies of the people we are caring for, in the mode of Luria and Sacks. The stories we construct with the people we are caring for help them to be understood and, in a sense, to join our world as they are. We need to try to rationally reconstruct how the people with profound intellectual disability we are caring for experience the world. We do this by studying their developmental history and seeing which developmental phases they have and have not passed through; by studying their actual behaviour in their daily environment; and by using any neurological and psychological data we have to discover how they see the world. With these facts in mind we can use Bayesian analysis and Tarski’s theory of truth to construct a model of the beliefs and desires of the people we care for. We can then translate these models into humanistic stories which guide us as we interact with our clients on a daily basis.

Part 2: Interpreting the Mind of the Other

Beneath the uniformity that unites us in communication there is a chaotic personal diversity of connections, and, for each of us, the connections continue to evolve. No two of us learn our language alike, nor, in a sense, does any finish learning it while he lives.

(Word and Object, p. 12)

When you meet another person and want to interpret their behaviour, it is best to begin externally, trying to triangulate on shared objects of experience in the external world. Doing this is difficult; typically we use the language of the eyes or pointing gestures to help us latch onto or refer to objects in the external world. As we saw in the last section, the majority of children from about the age of 12 months will point to things they want their caregiver to get for them. Children follow the direction of their parents’ gaze and use their own gaze to indicate shared objects in their environment. This is the beginning of a two-year process in which the child develops shared intentionality. Again, not every child goes through this phase; for example, some autistic children are severely developmentally delayed and do not develop shared intentionality as quickly as their peers, if at all.

One of our key entering wedges into understanding each other is our ability to triangulate on shared objects of experience. Another is our ability to say yes and no. People who cannot speak, whether through motor damage to their vocal cords or severe Broca’s aphasia, usually retain the ability to assent or dissent to propositions spoken to them; they can nod their heads to indicate assent or shake their heads to indicate dissent. People who have locked-in syndrome for whatever reason, be it Motor Neurone Disease, a severe stroke, or some debilitating accident, can be trained to indicate yes and no by blinking: once to indicate yes and twice to indicate no[3]. Trying to communicate with people with Wernicke’s aphasia is much more difficult and is best achieved by focusing on objects in the external world and using gestures as much as possible.
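
As a toy illustration of the yes/no convention just mentioned (one blink for assent, two for dissent), the sketch below simply maps a blink count within an agreed response window onto an answer. The function name and the ‘unclear’ fallback are my own additions, not part of any standard protocol.

```python
def decode_blinks(blink_count):
    """Map a blink count in the agreed response window to an answer.

    Convention assumed here: 1 blink = yes, 2 blinks = no;
    anything else is treated as unclear and should prompt a repeat.
    """
    if blink_count == 1:
        return "yes"
    if blink_count == 2:
        return "no"
    return "unclear"

print(decode_blinks(1))  # yes
print(decode_blinks(2))  # no
print(decode_blinks(0))  # unclear
```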

With people who have specific diagnoses such as Wernicke’s aphasia, Broca’s aphasia, autism, or an inability to speak because of motor difficulties (e.g. severe cerebral palsy), speech and language therapists will have set programmes which have worked (as well as can be expected) in the past. However, you do not always have a clear diagnosis to work with; sometimes your client will have multiple diagnoses, and a personal, idiosyncratic history only a tiny portion of which may be captured in care plans and medical notes. So here it is up to the carer who is with the client on a daily basis to try to rationally reconstruct the inner world of the client so that a high quality of life can be maintained.

However, this task should not be underestimated: we have seen that the typical wedges into language may not be available. People may find it difficult to triangulate on a shared object of experience, they may not automatically track eye direction, and if their understanding of language is severely impaired or non-existent then we may not even be able to get them to assent or dissent to propositions verbally. However, this should not be a cause for despair; we can still track a person’s interests and desires by carefully studying what they tend to orient towards and away from, though we may need to facilitate this process by providing new opportunities on a daily basis in order to ascertain which aspects of their world they like and dislike.

However, even when we do not face the above problems, interpretation is not as easy as it may seem. We still face the problem of underdetermination (henceforth UD). UD is a phenomenon which philosophers have been working on for years. The basic idea of UD is that our total observational data is consistent with more than one theory. Underdetermination has been discussed in detail by philosophers like Quine, Wittgenstein, and Goodman. Linguists like Chomsky have argued that the underdetermination facing a child as they learn their first language can only be overcome by postulating innate apparatus.

Most contemporary cognitive scientists agree with Chomsky on this point. I will now cite three of these thinkers to give you a flavour of how they understand UD. In his book The Language Instinct, Steven Pinker discusses UD:

A rabbit scurries by, and a native shouts, ‘Gavagai’ What does gavagai mean? Logically speaking, it needn’t be ‘Rabbit’. It could refer to that particular rabbit (Flopsy, for example). It could mean any furry thing, any mammal, or any member of that species of rabbit (say, Oryctolagus), or any other member of that variety of that species (say, chinchilla rabbit). It could mean scurrying rabbit, scurrying thing, rabbit plus the ground it scurries upon, or scurrying in general. It could mean footprint-maker, or habitat for rabbit fleas. It could mean the top half of a rabbit, or rabbit meat on the hoof, or possessor of at least one rabbit’s foot. It could mean anything that is either a rabbit or a Buick. It could mean collection of undetached rabbit parts, or ‘Lo Rabbit hood again, or ‘It rabbiteth’, analogous to ‘it raineth’. The problem is the same when the child is the linguist and the parents are the natives…Figuring out which word to attach to which concept is the Gavagai problem, and if infants start out with concepts corresponding to the kind of meanings that languages use, the problem is partly solved. (1994, 156)

Pinker then goes on to cite the research of the psychologist Ellen Markman to show that it is indeed the case that children are born with concepts which constrain the types of meanings that words can be given. It is Pinker’s view that Quine’s problem is a problem of UD which is solved by the fact that the concepts that children are born with will place innate constraints on the type of meanings that children can attach to words.

The child psychologist Paul Bloom draws a similar conclusion to Pinker:

These problems of reference and generalization are solved so easily by children and adults that it takes philosophers like Quine and Goodman to even notice that they exist. If we see someone point to a rabbit and say ‘gavagai’, it is entirely natural to assume that this is an act of naming and that the word refers to the rabbit and should be extended to other rabbits. It would be mad to think that the word refers to undetached rabbit parts or rabbits plus the Eiffel Tower. But the naturalness of the rabbit hypothesis and the madness of the alternatives is not a logical necessity; it is instead the result of how the human mind works. (2000, 5)[4].

 

The Harvard linguist Cedric Boeckx echoes the claims of the above thinkers:

Yet, if you think about it, the Gavagai situation is the one we all faced as children trying to acquire the meaning of words. How did we guess that elephant refers to that big grey animal with a long trunk? Because someone pointed to the animal and said elephant? But how did you know what exactly was being pointed at? Surely the finger couldn’t point at the whole elephant; it was your cognitive bias that interpreted the act of pointing in that way. (2010, 41)

There is a lot of debate in contemporary cognitive science as to whether Pinker et al. are justified in postulating innate concepts to overcome UD. However, when it comes to dealing with people who have profound intellectual disabilities and have not learned to speak, we really need to be careful not to impose our own ontology onto them. Their way of interpreting the world might be radically different from ours, and we cannot be sure that they share the same innate apparatus as us, so we must proceed with caution.
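
The underdetermination point can be made vivid with a small sketch. Treat each candidate meaning of ‘gavagai’ as a rule saying which observed scenes it applies to, and check which candidates survive a given set of observations. The scenes and hypotheses below are invented, and the only point is that more than one candidate remains consistent with all the data.

```python
# Each scene is described by a set of features the observer could notice.
scenes_where_gavagai_was_uttered = [
    {"rabbit", "scurrying", "grass"},
    {"rabbit", "sitting", "burrow"},
]

# Candidate meanings, expressed as conditions on a scene's features.
hypotheses = {
    "rabbit":                  lambda s: "rabbit" in s,
    "undetached rabbit parts": lambda s: "rabbit" in s,
    "scurrying thing":         lambda s: "scurrying" in s,
    "rabbit or Buick":         lambda s: "rabbit" in s or "buick" in s,
}

surviving = [name for name, applies in hypotheses.items()
             if all(applies(scene) for scene in scenes_where_gavagai_was_uttered)]

print(surviving)
# ['rabbit', 'undetached rabbit parts', 'rabbit or Buick']: the data alone
# cannot decide between these candidates; that is the underdetermination.
```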

                                                Part 3: The Web of Belief

The approach to the problems of meaning, belief, and desire that I have outlined is not, I am sure it is clear, meant to throw any light on how in real life we come to understand each other, much less how we understand our first concepts and our first language. I have been engaged in a conceptual exercise aimed at revealing the dependencies among our basic propositional attitudes at a level fundamental  enough to avoid the assumption that we can come to grasp them-or intelligibly attribute them to others-one at a time. My way of performing this exercise has been to show how it is in principle possible to arrive at all of them at once.  (Problems of Rationality, p. 166)

When using Davidson’s views on the theory of interpretation we need to be careful not to confuse Davidson’s project with the one I am interested in. My project is primarily a pragmatic one: I am concerned with coming up with basic principles which are useful in helping to interpret the behaviour of people with profound intellectual disability. Davidson, on the other hand, is concerned with constructing a theory of meaning and interpretation. Because of our different purposes, I use Davidson’s project as needed and set aside what is not useful to mine. This is no criticism of Davidson, as he intended his theories to deal with entirely different problems.

One proviso I should make at the outset is that Davidson limits himself to verbal communication alone, because of his belief that we are not justified in attributing propositional attitudes to non-linguistic animals. I will not be following Davidson in this. There is ample evidence from ethology that animals without complex language can think using propositional attitudes.

In his 1987 “A Nice Derangement of Epitaphs” Davidson makes the following claim:

We may say that linguistic ability is the ability to converge on a passing theory from time to time-this is what I have suggested and I have no better proposal. But if we do say this, then we should realise that we have abandoned not only the ordinary notion of a language, but we have erased the boundary between  knowing a language and knowing our way around in the world generally. A passing theory is really like a theory at least in this, that it is derived by wit, luck, and wisdom from a private vocabulary and grammar, knowledge of the ways people get their point across, and rules of thumb for figuring out what deviations from the dictionary are most likely. There is no more chance of regularising or teaching, this process than there is of regularising or teaching the process of creating new theories to cope with the new data in any field-for that is what this process involves ( A Nice Derangement of Epitaphs p. 107)

I think that he is correct on this point, and I think it contradicts many of his other claims about the necessity of language for a creature to have propositional attitudes. On the picture in “A Nice Derangement of Epitaphs”, having a language is not a black and white issue; whether a person has a language is better thought of as a matter of degree. I think this is accurate. Does a person with Broca’s aphasia have a language? Does a person with Wernicke’s aphasia? To a certain degree we can answer yes to these questions, and to a certain degree no. Everyone agrees that a deaf person who uses sign language has a language, but what about a person who can use gestures to communicate their needs but has not mastered the complex sign language that most of their peers have? Again we can answer both yes and no.

Consider Lamh, a type of sign language used by children and adults with intellectual disabilities. This sign system uses a few basic signs, some of which can be combined with each other, and these signs help users communicate their basic needs. Now, in an obvious sense Lamh is not a language. It lacks the recursive ability to combine a finite number of symbols into an infinite set of expressions; it lacks the complex grammar typical of the world’s languages. Its signs lack the sometimes complex, ambiguous, context-sensitive features of typical words, they lack some of the productive morphological features we find in ordinary language, and they are not really compositional. So there is a clear sense in which we can argue that Lamh is not a language. Lamh clearly is not an internal generative procedure which we use to link sound and meaning, so it is not a language in the sense that Chomsky uses the term.

Lamh is, on the other hand, an attempt to communicate using signals. A person using Lamh uses a signal with the intention that the person they are communicating with attaches the same meaning to it that they do, and hence will understand them. A person with profound intellectual disability who cannot use Lamh can still try to communicate by indicating objects in the world and saying yes or no as people interpret them correctly or incorrectly. Likewise, a person who cannot move their body but who can understand what others are saying, and can assent and dissent to what is said, has a language. My definition of language is any intentional attempt to communicate with another. This more expansive use of ‘language’ is more intuitive than Chomsky’s, in that it captures all intentional attempts at communication with others. My definition is not really at odds with Chomsky’s; people can define the word ‘language’ as they please, and using ‘language’ in my expansive way makes sense for my project because it nicely captures the various communicative abilities of people with different forms of aphasia[5].

So, given this expansive (1987) Davidsonian picture of language, Davidson’s claim that only to creatures with a language can we sensibly attribute propositional attitudes becomes less controversial. Anybody who is engaging with the world and others using intentional signals, in however limited a way, has a language on this Davidsonian model[6].

With the preliminaries out of the way we can now begin to outline the model for interpreting people with profound intellectual disabilities. Davidson notes correctly that when we are trying to understand linguistic utterances we need the locutions of intentional action: belief, desire, meaning, hopes, wishes, attempts, and so on. These intentional locutions are the background conditions which are needed in order for speech to occur. For Davidson, meaning, belief, and desire are three interlocking terms, none of which can be reduced to the others. I think that Davidson is correct on this general point. When we try to interpret the behaviour and communicative intent of people speaking to us we must help ourselves to these interlocking concepts.

Davidson grounds his Unified Theory of Thought, Meaning and Action on Ramsey’s decision theory. Ramsey’s Bayesian decision theory relies on two basic intensional notions: belief and preference. This Ramseyan picture tries to make sense of behaviour by assuming that the people being interpreted are rational agents who act in their own best interests. We assume that a person attaches a value to a particular course of action, calculated from the value of the outcome the action would achieve and the likelihood that the action will achieve that outcome. We do not have to assume that the subject is consciously working out this formula; we only need to assume that the brain is doing the relevant calculations. Ramsey’s method typically involves getting subjects to gamble on certain outcomes and judging their beliefs and preferences from how they choose. This method is too complex for the task of interpreting the behaviour of people with intellectual disabilities, so I suggest that instead of gambling we use the simpler Davidsonian method of testing for assent and dissent to various propositions spoken about the external world.
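
For readers who want the decision-theoretic idea spelled out, here is a minimal sketch of the expected-value calculation just described: the value of an action is the utility of its outcome weighted by the probability that the action achieves it, and the predicted choice is the action with the highest expected value. The action names, probabilities, and utilities are invented for illustration.

```python
# Invented example: estimated probability of a desired outcome for each
# action, and the utility (desirability) of that outcome for the person.
actions = {
    # action: (probability the outcome occurs, utility of the outcome)
    "go for a walk":    (0.8, 5.0),
    "attend a concert": (0.6, 2.0),
    "stay in and rest": (0.9, 3.0),
}

def expected_value(prob, utility):
    """Expected value of an action with a single salient outcome."""
    return prob * utility

ranked = sorted(actions.items(),
                key=lambda kv: expected_value(*kv[1]),
                reverse=True)

for action, (p, u) in ranked:
    print(f"{action}: expected value {expected_value(p, u):.2f}")
# The top-ranked action is the one decision theory predicts the agent,
# acting in their own interests, would choose.
```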

Davidson correctly notes that “A sentence held true plus interpretation is equal to a belief” (ibid p. 156). So if a rabbit passes by, I say (1) ‘rabbit’, and my client assents, we can conclude that he holds (1) true. To say that a person holds (1) true means that they believe that (1) is true, and if they believe that (1) is true then it follows that they understand the meaning of (1). So here we can see how we can get meaning and belief out of someone assenting to a sentence. However, if I spent the day questioning a client and determined which sentences he held true and which he held false, I would have a lot of data but very little theory to explain the data. I would be able to derive meaning and belief from evidence of which environmental factors cause a client to hold a sentence true. Obviously this process is still subject to the massive underdetermination we discussed earlier, but we have little choice at the beginning but to impose our ontology onto the client as we begin to interpret which sentences they are assenting to. So, for example, we must assume that they assent to ‘Rabbit’ as in ‘a particular rabbit’, as opposed to ‘undetached rabbit part’, ‘particular instance of the universal rabbithood’, and so on. As we move on to assessing how the various sentences they assent to connect to each other, we will remove some indeterminacy; but at the end of any process of interpretation some indeterminacy will remain.
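
The data-gathering step described here could be recorded along the following lines. The record structure and example sentences are my own illustration of the bookkeeping, not Davidson’s formalism: each trial notes the prompting situation, the sentence put to the client, and whether they assented.

```python
# Each trial records the prompting situation, the sentence put to the client,
# and whether they assented or dissented (by nod, blink, gesture, etc.).
trials = [
    {"situation": "rabbit runs past the window", "sentence": "Rabbit", "assent": True},
    {"situation": "rain against the window",     "sentence": "Rabbit", "assent": False},
    {"situation": "music playing in the room",   "sentence": "Music",  "assent": True},
]

def sentences_held_true(trials):
    """Group the situations under which each sentence was assented to.

    Under our (imposed) interpretation, assent to a sentence in a situation
    is evidence that the client believes the sentence true of that situation.
    """
    held_true = {}
    for t in trials:
        if t["assent"]:
            held_true.setdefault(t["sentence"], []).append(t["situation"])
    return held_true

print(sentences_held_true(trials))
# {'Rabbit': ['rabbit runs past the window'], 'Music': ['music playing in the room']}
```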

As we saw above, our process of interpretation will involve some degree of imposing our rationality onto the people we are interpreting. So we should not interpret people as accepting contradictions, or as denying the law of identity, and so on. Again, there are instances where people do accept contradictions; for example, in dreams people report accepting contradictions and breaking the law of identity, and the psychologist and logician Matte Blanco takes this to indicate that at a deep unconscious level we follow different logical laws. However, outside of these esoteric examples people typically follow the same logical laws. In any case, when we begin to interpret our clients we have little choice but to assume that they are using the same logical laws as us.

It is best to think of our process of interpreting our clients as uncovering what Quine calls their web of belief. The person we are interpreting has an interconnected set of sentence-like beliefs which are related to each other by logical entailment. Sentences at the interior of the web are the deepest-held truths, the ones we are most wedded to, while sentences at the periphery are more speculative beliefs. Since these sentences are all directly or indirectly connected to each other, changing the truth value of any sentence will have consequences for the rest of the web. We assume that the person holds a web of belief which is somewhat like ours at the core, in terms of ordinary enduring middle-sized objects (trees, mountains) and the logical laws obeyed. The core of the web of belief can be equated with what Davidson calls a prior theory. Davidson correctly notes that when interpreting the behaviour of another we might not be able to make these principles definite, but we can at least form heuristic rules of thumb.
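
One crude way to picture the web of belief is as a graph whose nodes are sentences held true and whose edges record which beliefs depend on which: core beliefs have many dependants, peripheral ones few or none, so revising a core belief forces a review of much more of the web. The sketch below, with invented sentences, only illustrates that propagation.

```python
# Edges run from a supporting belief to the beliefs that depend on it.
# The sentences are invented placeholders for a client's held-true sentences.
depends_on = {
    "Objects persist when unseen": ["My cup is still in the kitchen",
                                    "The bus will arrive as scheduled"],
    "My cup is still in the kitchen": ["I can have tea after the walk"],
}

def beliefs_to_review(revised, graph):
    """Collect every belief downstream of a revised one (breadth-first)."""
    queue, seen = [revised], set()
    while queue:
        current = queue.pop(0)
        for dependent in graph.get(current, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(beliefs_to_review("Objects persist when unseen", depends_on))
# A core belief has many dependants; a peripheral one ("I can have tea...")
# has none, which is why revising the core is so much more disruptive.
```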

When trying to explicate another person’s web of belief, an important piece of evidence to be discovered is the degree to which the subject believes that one sentence supports another. So, for example, suppose a subject accepts that sentence (1) ‘The cat is on the mat’ is true. Then an important step is to ask the subject to what degree they think (2) ‘The mat is under the cat’ supports (1), or to what degree (3) ‘The cat likes sitting on the mat’ supports (1). By understanding how much a subject thinks a series of sentences supports or fails to support another sentence, we can assign numbers and estimate the degree to which the person believes the original sentence to be true. Using this model we will be able to predict what beliefs our client is likely to hold on various topics. We can further develop the model to capture their preferences relative to their beliefs, and once we have done this we will be in a position to facilitate them as they interact with their respective worlds.
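
The scoring exercise could be tabulated roughly as follows: elicit, for a target sentence, the degree to which the client takes various other accepted sentences to support it, and combine the answers into a crude confidence score. The 0-1 scale and the simple averaging below are invented conveniences, not a worked-out confirmation theory.

```python
# Degrees of support elicited from the client, on a 0-1 scale (invented data).
# Each entry: (supporting sentence, degree to which it supports the target).
support_for_target = {
    "The cat is on the mat": [
        ("The mat is under the cat", 0.9),
        ("The cat likes sitting on the mat", 0.4),
    ],
}

def confidence(target, elicited):
    """Average the elicited degrees of support into a crude confidence score."""
    degrees = [d for _, d in elicited.get(target, [])]
    return sum(degrees) / len(degrees) if degrees else 0.0

print(confidence("The cat is on the mat", support_for_target))  # 0.65
```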

Part 4: Being-in-the-World

Much of modern philosophy, following Descartes in the seventeenth century, has viewed the mind as a disembodied entity. When philosophers have considered the mind they have viewed it as an inherently private entity. This perspective led to the problem of other minds: philosophers since Descartes have been asking how we can know that other people have minds, and how we can justify the ascription of minds to others. The answer Descartes came up with was that we are justified in attributing a mind to a creature if it has a language and can use it to converse. In the twentieth century the brilliant mathematician and computing pioneer Alan Turing claimed that if a computer was able to pass a Turing Test[7], then we would be justified in saying that it was conscious. The Turing Test involves a computer programme hidden behind one screen and a person hidden behind another. To administer the test, a judge is given a terminal of their own and can type any question to the hidden computer and the hidden person. If the judge is not able to decide which of the two they are talking to, then it is argued that the computer must be conscious, the thought being that if a computer can converse on all topics as well as a person, there is no sensible reason to deny it consciousness.

Both the Turing Test and Descartes’ test make language the central key to discovering whether others have minds and are conscious. Neither test mentions embodiment. This omission is strange given that humans are by nature always embodied and always related to the world from some perspective. Furthermore, human thought is intentional; it is about something beyond itself. A test that ignores the embodied and intentional nature of thought will miss a key aspect of thought. Moreover, treating the attribution of mind and consciousness to others as a test is wrongheaded in the first place. Rather, our normal developmental course is to assume intentional agency when creatures act contingently with us and try to engage with us about the shared world we experience. It is typically only when people have a developmental disorder such as autism that they fail to attribute intentional agency and triangulate with their peers.

When dealing with people with profound intellectual disabilities we should not start off like Descartes or Turing, looking for verbal behaviour to give us signs of some hidden consciousness; rather we should begin with the assumption that they are embodied agents engaged with the shared world of experience alongside us. One of the key insights of occupational therapy is that a substantial part of who we are is constituted by what we do: our everyday practices help make us who we are. People with intellectual disability typically have a key worker who helps them decide what activities they want to engage in on a weekly basis. This person-centred approach to care relies on the key worker working closely with the client and helping to organise activities for them. This work is vital, as a large part of who we are is shaped by our daily interactions with the external world. A person who has very limited interactions with the world outside them could end up with an impoverished mode of being.

However, one difficulty many carers have is that they find it hard to know what their client actually enjoys doing. If the client can say ‘yes’ or ‘no’ to your questions then you can ask them whether they want to engage in activity X or Y. One difficulty with this approach is that the client is reliant on you to suggest options. If you begin with a limited set of options, you risk inadvertently boxing the client in with a few forced choices. Given a limited choice of activities, the client may end up choosing X simply because it is slightly preferable to Y. As a result of repeated binary choices the client will end up doing one of a few things every day; a habit will build up, the client’s world may shrink, and their daily routine will become predictable from one day to the next. The mind, which is not some fixed point hidden from view but rather a series of dispositions to engage in various activities with others as we interact with the shared world of experience, loses its potential for growth if not nurtured. It is therefore vital that the carer has the skills to help the client make new and interesting choices which result in new experiences and help the client grow.

The techniques I discussed earlier in the paper will help the carer hugely in this process. Instead of just passively asking the client what they want, we build up an accurate model of their cognitive processes. We need, firstly, a model of their likes and dislikes and an abstract model of how they perceive the world, and this model needs to guard against the Typical Mind Fallacy. There is no fixed way of avoiding the Typical Mind Fallacy; we simply have to keep our eyes open to behavioural differences, any evidence of neurological differences, and so on. We have to do our best to interpret our clients in terms of logical inference as far as possible. By questioning them and working out the degree to which they think one sentence they hold true supports another, we give ourselves a model not only of what they want in a particular circumstance, but of what they would want in a different circumstance. Our model will not be perfect, but it will help us to construct a series of things we think the client would want to do, based on our worked-out theory of their preferences. The primary benefit of the model is that it is productive: the various sentences held true and held important can be combined to give us a potential infinity of suggestions to put to the client. With this model we will not be forced into simple binary choices.
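
The productivity claim, that a finite stock of learned preferences can be recombined into an open-ended list of suggestions rather than a fixed binary menu, can be illustrated with a small generator. The activity components and preference scores are invented; only the combinatorial point matters.

```python
from itertools import product

# Invented preference scores (0-1) of the kind learned from the model above.
liked_places     = {"garden": 0.9, "library": 0.6, "pool": 0.3}
liked_activities = {"listening to music": 0.8, "walking": 0.7, "painting": 0.5}

def suggest(places, activities, top_n=5):
    """Combine components into candidate suggestions, ranked by a simple
    product of the learned preference scores for each component."""
    candidates = [
        (f"{activity} in the {place}", p_score * a_score)
        for (place, p_score), (activity, a_score) in product(places.items(),
                                                             activities.items())
    ]
    return sorted(candidates, key=lambda c: -c[1])[:top_n]

for suggestion, score in suggest(liked_places, liked_activities):
    print(f"{suggestion}  (estimated preference {score:.2f})")
```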

There are obvious difficulties with this model, chiefly collecting the data. Finding out what the client wants involves questioning them on a variety of topics, seeing which sentences they hold true, which sentences they think support others, and to what degree. A client may have spent a substantial proportion of their day at a day service and may be tired upon returning home; to expect such a client to engage in a long series of questions may be to expect too much. Any intervention in the form of questions will obviously involve a strategy worked out with the client, the client’s family, and the team of professionals working with the client, e.g. social worker, nurse, psychologist, speech and language therapist, occupational therapist. This strategy will likely involve a non-invasive collection of data by talking with the client bit by bit over a long period of time, and constructing the model over that period. The model will be used on a daily basis and modified as we learn more and more. It is important that the model does not remain a series of numbers on a page representing preferences; we need to translate it into the client’s story of how they experience the world. With this model in place we will be in a position to help clients choose more and more interesting activities, and the client’s key worker will be able to facilitate the client as they interact with the world, instead of presenting them with a few limited choices.


[1] My summary of the developmental stages children go through during the first 9 years of their life is taken from Tomasello, “Constructing a Language” (2003), and Paul Bloom, “How Children Learn the Meaning of Words” (2002).

[2] See Johnston and Carey (1998) Knowledge and Conceptual Change in Folk-Biology: Evidence from Williams Syndrome.

[3] Direct brain-computer interfaces exist at the moment which people with locked-in syndrome of whatever kind can use to communicate. However, the availability of these devices is at present extremely limited.

[4] While Pinker’s and Bloom’s arguments on this narrow point are similar, obviously this does not imply that they hold the exact same view on the nature of the mind.

[5] For a detailed discussion of how Chomsky defines ‘language’ see his Knowledge of Language (1986).

[6] Again Davidson probably wouldn’t agree with me on this point. However, it is not my purpose to defend all aspects of Davidson’s philosophy.

[7] Turing suggested the test but did not really take it seriously; however, artificial intelligence theorists, philosophers, and psychologists take the test very seriously, and there are annual competitions offering prize money to engineers who can build computers that pass it.

Intellectual Disability and Radical Translation: Part 1

Radical Interpretation and Intellectual Disability: The Principle of Charity and the Typical Mind Fallacy.

Different persons growing up in the same language are like different bushes trimmed and trained to take the shape of identical elephants. The anatomical details of the twigs and branches will fulfil the elephantine form differently from bush to bush, but the overall outward results are alike. (Word and Object, p. 8)

In institutions all around the world live people with severe intellectual disabilities. Such disabilities often go hand in hand with communication difficulties. People can have an inability to understand language through damage to the Wernicke’s area, while others can be born with an inability to speak language through damage to the Broca’s area. Obviously having an inability to speak or understand language does not make communication impossible, animals that have no linguistic abilities nevertheless are very skilled at communicating with each other (See Hauser 1996, and Tomasello 2008). People with profound intellectual disabilities sometimes have serious difficulties communicating their needs and interests. Carers whose role it is to provide help for people with intellectual disabilities to fulfil their life goals, and achieve a full life integrated in society at large, often have difficulties interpreting the attempted communication of the people they are trying to help. Even for two people without profound intellectual disabilities or communication impairments interpreting each other can be difficult. Our linguistic communication is shot through with metaphor and analogies (Lakoff and Johnston 1987, Hofstadter and Sander 2013). When two people converse with each other they share a background theory which they use to interpret what the other is saying to them, and to guide their own prospective speech-acts. Donald Davidson sums up this process as follows:

Here is a highly simplified and idealised proposal about what goes on. An interpreter has, at any moment of a speech transaction, what I persist in calling a theory. (I call it a theory, as remarked above, only because a description of the interpreter’s competence requires a recursive account.) I assume that the interpreter’s theory has been adjusted to the evidence so far available to him: knowledge of character, dress, role, sex, of the speaker, and whatever else has been gained by observing the speaker’s behaviour, linguistic or otherwise. As the speaker speaks his piece the interpreter alters his theory, entering hypotheses about new names, altering the interpretation of familiar predicates, and revising past interpretations of particular utterances in light of new evidence… To put this differently: the theory we actually use to interpret an utterance is geared to the occasion…The speaker wants to be understood, so he intends to speak in such a way that he will be interpreted in a certain way. In order to judge how he will be interpreted, he forms, or uses a picture of the interpreter’s readiness to interpret along certain lines. (A Nice Derangement of Epitaphs pp. 99)

Davidson is picking out a key component of what goes on in interpretation between two people. Both people come to the occasion of interpretation with their own prior theory, a background expectation about the world and the agents it contains. When conversing they try to guess what prior theory the person they are speaking to is using and they use their speech acts in ways they think are congenial with the other person’s prior theory. Ultimately, Davidson argues, understanding takes place when both thinkers converge on a passing theory. Davidson claims that accident aside a passing theory is where agreement is greatest (ibid p. 102). As people talk their prior-theories become more and more alike, and as a result so do their passing theories. Where agreement and understanding coincides so does the people’s passing theory. An important point to note is that Davidson’s theory of interpretation involves maximizing agreement with the person we are conversing with. For Davidson, without a massive shared background of truths that people share, interpretation and hence communication will be impossible. He stresses the importance of maximising agreement in the quote below:

The methodological advice to interpret in a way that optimizes agreement should not be conceived as resting on a charitable assumption about human intelligence that might turn out to be false. If we cannot find a way to interpret the utterances and other behaviour of a creature as revealing a set of beliefs largely consistent and true by our standards, we have no reason to count that creature as having beliefs, or saying anything. (Radical Interpretation p. 137)

So, for Davidson, a shared background set of beliefs in common is a necessary condition of communication taking place. He also argues that it is important to keep in mind, not only the embodied presence of both people communicating with each other, but also to the shared objects of experience of both people. This triangulation of self, other, and shared object of experience is a necessary condition of communication taking place. Communication does not take place in a vacuum, and a theory of communication that ignores the world that both thinkers are communicating about will go badly wrong.

When Davidson speaks about triangulating on a shared object of experience, he is arguing that such triangulation is a necessary condition of communication. There is good empirical evidence which indicates that humans do in fact communicate with each other by triangulating on a shared object of experience (Cavell 2008). Such triangulation is typically facilitated by the way young children automatically track the eye movements of each other and understand what the meaning of pointing is. However this ability is not universal, children with autism are typically bad at tracking eye movement and interpreting the pointing gestures of others (Baron-Cohen 1995). Autistic children who do not also suffer from any kind of intellectual disabilities may experience language delays but with training usually develop language. However with children who suffer from intellectual disabilities and autism the possibility of developing language is greatly decreased. A substantial proportion of people who have an intellectual disability, do so because of some birth trauma, or because of toxins that the child in utero was inadvertently exposed to. So the disability which affects the child can have multiple effects throughout the brain. So, for example, as a result of an accident of birth a child could be born with an inability to understand speech, and an inability to track eye-movements and interpret the pointing gestures. Obviously communicating with a person like this will be extremely difficult, and facilitating communication will require a lot of careful thought on behalf of care givers.

It important, at this point, to discuss some developmental phases children typically go through. A few hours after birth children are drawn to look longer at drawings of human faces than at other patterns (Fantz 1963).From 3 months children are monitoring the eye gaze of others. (Cohen 1995). From 4 months humans have some grasp of object behaviour (Baillargeon, et al., 1985). At 6 months children begin reaching for objects. At 9 months children begin to recognise intentionality. At 9 months proto-declarative referencing begins (Cohen 1995). At 10 months children begin to crawl. Between 9-12 months children begin to engage in triadic behaviours. At 12 months children begin pointing and this brings them deeper into the world of triadic relations. From 12- 16 months children begin to learn 0.3 words per day (Bloom, 2002). From 14-16 months children begin to walk. From 18-23 months children begin to learn 0.8 words per day. Between 18-24 months children recognize themselves in the mirror (Rochat and Straino, 1999). From18-24 months children display a deep understanding of others intentional aims. From 23- 30 months children learn 1.6 words per day. At 30 months children begin to master the distinction between Mass and Count nouns (Gordon, 1985). At 48 months children begin to pass the false belief test. From 4 to 6 years of age children begin explicit mentalising; justifying false beliefs by pointing out misleading reasons.  They also begin to understanding higher order mental states (Fritz and Cohen 1995) Up to the age of 6 years children prefer to organise things according to thematic relations .From 9 onwards children begin peak period of language acquisition learning up to 12 words a day.[1]

All of the above developmental facts are important to the child as they begin to learn their language and try to communicate with each other. Some of these developmental regularities are learned while some are the result of innate mechanisms. We have seen above that autistic children who do not go through the pointing phase have extreme communication difficulties. Considered in Davidsonian terms the children’s shared developmental history forms a part of their prior theory which they take for granted when they communicate with each other. Children with profound intellectual disabilities sometimes never pass through these various developmental phases. As a result of this their prior theory is so different than their peers that converging on a passing theory is virtually impossible. When this happens people with profound intellectual disabilities sometimes retreat into their own subjective worlds, and despite their carers best intentions never learn to communicate their needs.

As we saw above when discussing Radical Interpretation Davidson recommends applying what he calls the principle of charity. To do this we need to try and make maximum sense of the utterances of the people we are interpreting. We need to minimize the incoherencies which we attribute to the people who we are trying to interpret. So, for example, if when interpreting the utterances of an alien tribe we encounter we find ourselves ascribing to them a belief in true contradictions we should consider the possibility that we are simply misinterpreting them. It is after all a simpler assumption that we have misinterpreted them than that they hold radically anti-logical views. This principle of charity is a valuable heuristic device to help us when engaging in radical interpretation. Assuming that the person is a lot like us, and holds a lot of the same true beliefs about the world is very important if we want to engage with the person about shared objects in the environment. Shared intentionality (similar belief/desire psychology), and shared background knowledge about the world of ordinary enduring middle sized perceptual objects, helps triangulation on a shared object of experience take place, and hence facilitates communication. However as we saw above with people with profound intellectual disabilities the idea of a principle of charity and prior theories converging as people talk with each other is of limited help.

When we apply Davidson’s approach we run the risk of interpreting people so closely by the lights of our own conceptual scheme that we bracket their otherness. So, for example, Davidson, using the principle of charity, argues that we should not interpret (or translate) people as believing in true contradictions. However, even within our own linguistic community there are people who believe in true contradictions. Graham Priest’s dialetheism is a good example of a theoretical position which accepts the existence of true contradictions. Matte Blanco, in The Unconscious as Infinite Sets, argues that the unconscious is governed by a logic that sometimes accepts true contradictions. Blanco and Priest pose a serious difficulty for Davidson’s principle of charity: an interpreter trying to understand an alien tribe using the principle of charity would end up falsifying the data if they were interpreting an alien version of Priest or Blanco.

A real difficulty with interpreting another person is that it is necessary to assume that they are in most ways similar to you; without this assumption it is virtually impossible to interpret what they are saying. However, as we saw above, this strategy risks falsifying the data. William James was sensitive to the problem of incorrectly interpreting others because we assume that their minds have the same structure as ours; he called this the typical mind fallacy. He noted how philosophers like Berkeley and Locke misinterpreted each other because they wrongly assumed that all minds were structured in the same way as their own. This faulty reasoning led to disputes between Locke and Berkeley on the nature of abstract ideas, disputes which could have been resolved if they had not wrongly assumed that all people have the same capacity for mental imagery (see Berman 2008, and Berman and King 2014).

So when trying to interpret the communication of another correctly, while it is necessary to assume that they are mostly like us, we must be aware that this tool can lead us astray; we must recognise difference when it manifests itself. For example, children with Williams Syndrome typically have a different intuitive folk biology from ordinary children[2]. So if someone were to apply a strict principle of charity to the interpretation of those with Williams Syndrome, it would impede communication rather than help it. It is important to note that at the beginning of interpreting another person the possibility of missing essential detail is always present. Even amongst people who do not suffer from profound intellectual disability, people sometimes misinterpret each other because they wrongly believe that the other has a mind just like theirs. So, for example, for years colour blindness was not noticed, variation in people’s ability to form mental images was ignored, and the existence of synaesthesia (e.g. people who taste colours) was denied. These differences were ignored despite the fact that people could communicate their ideas to each other through language. Given these facts it seems likely that we are not aware of the mode of experience of many of the people with profound intellectual disabilities whom we care for, most of whom do not even have the ability to communicate their experiences to us through language.

In 1968 the Russian neuropsychologist Alexander Luria wrote his famous The Mind of a Mnemonist about S. V. Shereshevskii, who had an almost unlimited memory as a result of his synaesthesia. Luria’s book was important because he not only discussed Shereshevskii’s neurological and psychological abilities but also told the story of his subjective experience of the world. The contemporary neurologist Oliver Sacks, who was heavily influenced by Luria, has followed in this tradition and over the last forty years has written a series of case studies describing the subjective experience of people suffering from a variety of neurological disorders. Like Sacks and Luria, Vilayanur S. Ramachandran has written some beautiful case studies describing the subjective experience of people with various neurological disorders. What these case studies have revealed is a variety of subjective experiences which are amazing and tragic at the same time. So, for example, there are people with prosopagnosia who become face-blind; they lose the ability to recognise even their own spouses’ faces. On the other side of the coin there are people with Capgras delusion who can recognise people’s faces but, because of damage to the connection between the amygdala and the inferotemporal cortex, are no longer capable of attaching the usual emotional significance to those faces and develop the delusion that their family and friends are imposters. There are people who acquire forms of amusia which make them find the sound of music intolerable. Sometimes these disorders, and ones like them, are acquired (through brain damage or a degenerative neurological disorder), while other times they are congenital.

In the case of people with profound intellectual disability who have difficulty communicating, disorders like these can go undiagnosed: behaviour which seems odd may in fact be the result of some such disorder. However, arranging an MRI or a CT scan can be difficult, as the patient may not be able to sit still while being scanned, and there may be difficulties in sedating the patient because of other medications they are taking. The waiting list for an MRI or CT scan can also run to months, or even a year. In the meantime, measures need to be taken to help the person with an intellectual disability lead a rich and fulfilling life.

So far we have seen that Davidson’s principle of charity is a useful heuristic tool for interpreting the behaviour of others; however, it runs the risk of ignoring radical otherness. In his “Unified Theory of Thought and Action” Davidson employs two formal tools for interpreting the verbal behaviour of the other: Bayes’ theorem and Tarski’s theory of truth. This formal treatment of interpretation is very useful and important, and it can be applied to data gathered by carers on the ground.

The carer, though, will go astray if they stick too strictly to Davidson’s principle of charity. It is important to realise that people whose brains are structured radically differently from those of others may not enjoy or want the same things as most other people in society. So, for example, a person with intellectual disabilities who suffers from certain types of amusia will obviously not like being brought to concerts. We can typically judge whether a person who has no language likes music by attending to their behaviour in certain circumstances. Now obviously, observing the environmental conditions which cause happy or unhappy behaviour is a minimal requirement for a carer. However, what is important is not just noting which environmental conditions lead to which behaviours, but using these facts to construct an accurate cognitive map of the person whose behaviour we are interpreting. This cognitive map can be used to help predict the behaviour of the person being cared for and to help them experience the world as they choose to.

Everything I have said here may seem blindingly obvious. However, the important point I want to make is the need to construct accurate models of the people whose behaviour we are interpreting, and to use these models to construct humanistic case studies of the people we are caring for, in the mode of Luria and Sacks. The stories which we construct with the people we are caring for help them to be understood and, in a sense, to join our world as they are. We need to try to rationally reconstruct how the people with profound intellectual disability we are caring for experience the world. We do this by studying their developmental history and seeing which developmental phases they have passed through and which they have not; by studying their actual behaviour in their daily environment; and by using any neurological and psychological data we have to discover how they see the world. With these facts in mind we can use Bayesian analysis and Tarski’s theory of truth to construct a model of the beliefs and desires of the people we care for (a toy sketch of the Bayesian part of this is given below). We can then translate these models into humanistic stories which guide us as we interact with our clients on a daily basis.
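
To make the idea of using Bayesian analysis to build such a model slightly more concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the hypotheses, the behaviours, and the probabilities are invented purely for illustration, and real judgments about a client would need far richer evidence. The point is only to show the shape of the calculation: prior beliefs about a client’s preferences get updated as the carer records observable reactions.

```python
# A minimal sketch of Bayesian belief updating over hypotheses about a client's
# preferences. All hypotheses, behaviours, and probabilities are hypothetical
# illustrations, not clinical values.

def bayes_update(priors, likelihoods, observation):
    """Return posterior P(hypothesis | observation) for each hypothesis."""
    unnormalised = {h: priors[h] * likelihoods[h][observation] for h in priors}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Two rival hypotheses about the client, with an initially even prior.
priors = {"enjoys_music": 0.5, "averse_to_music": 0.5}

# Assumed likelihoods of each observable behaviour under each hypothesis.
likelihoods = {
    "enjoys_music":    {"smiles": 0.7, "covers_ears": 0.1, "neutral": 0.2},
    "averse_to_music": {"smiles": 0.1, "covers_ears": 0.6, "neutral": 0.3},
}

# Update the model after each observed reaction to music being played.
posterior = priors
for observation in ["covers_ears", "covers_ears", "neutral"]:
    posterior = bayes_update(posterior, likelihoods, observation)

print(posterior)  # the posterior now strongly favours 'averse_to_music'
```

On this picture the carer’s notes about behaviour in particular circumstances play the role of the observations, and the evolving posterior plays the role of the cognitive map mentioned above.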

 Part 2: Interpreting the Mind of the Other

Beneath the uniformity that unites us in communication there is a chaotic personal diversity of connections, and, for each of us, the connections continue to evolve. No two of us learn our language alike, nor, in a sense, does any finish learning it while he lives.

(Word and Object, p. 12)

When you meet another person and want to interpret their behaviour, it is best to begin externally, by trying to triangulate on shared objects of experience in the external world. Doing this is difficult; typically we use the language of the eyes or pointing gestures to help us latch onto, or refer to, objects in the external world. As we saw in the last section, the majority of children from about the age of 12 months will point to things they want their caregiver to get for them. Children follow the direction of their parents’ gaze and use their own gaze to indicate shared objects in their environment. This is the beginning of a two-year process in which the child develops shared intentionality. Again, not every child goes through this phase; for example, some autistic children are severely developmentally delayed and do not develop shared intentionality as quickly as their peers, if at all.

One of our key entering wedges into understanding each other is our ability to triangulate on shared objects of experience. Another is our ability to say yes and no. People who cannot speak, whether through motor damage to their vocal cords or severe Broca’s aphasia, usually retain the ability to assent or dissent to propositions spoken to them: they can nod their heads to indicate assent or shake their heads to indicate dissent. People who have locked-in syndrome, for whatever reason (motor neurone disease, a severe stroke, or some debilitating accident), can be trained to indicate yes and no by blinking: once to indicate yes and twice to indicate no. Trying to communicate with people with Wernicke’s aphasia is much more difficult and is best achieved by focusing on objects in the external world and using gestures as much as possible.

With people who have specific diagnoses such as Wernicke’s aphasia, Broca’s aphasia, autism, or an inability to speak because of motor difficulties (e.g. severe cerebral palsy), speech and language therapists will have set programmes which have worked (as well as can be expected) in the past. However, you don’t always have a clear diagnosis to work with, and sometimes you have multiple diagnoses; moreover, the person you are working with has personal, idiosyncratic experiences, only a tiny portion of which may be captured in care plans and medical notes. So here it is up to the carer who is with the client on a daily basis to try to rationally reconstruct the inner world of the client so that a high quality of life can be maintained.

However, this task should not be underestimated; we have seen that the typical wedges into language may not be available. People may find it difficult to triangulate on a shared object of experience, they may not automatically track eye direction, and if their understanding of language is severely impaired or non-existent then we may not even be able to get them to assent or dissent to propositions verbally. This should not be a cause for despair, however; we can still track a subject’s interests and desires by careful study of what they tend to orient towards and away from, though we may need to facilitate this process by providing new opportunities for the client on a daily basis in order to ascertain which aspects of reality they like and dislike.

However, even when we do not face the above problems, interpretation is not as easy as it may seem. We will still face the problem of underdetermination (henceforth UD). UD is a phenomenon which philosophers have been working on for years; the basic idea is that our total observational data are consistent with more than one theory. Underdetermination has been discussed in detail by philosophers like Quine, Wittgenstein, and Goodman. Linguists like Chomsky have argued that the underdetermination facing a child as they learn their first language can only be overcome by postulating innate apparatus.

Most contemporary cognitive scientists agree with Chomsky on this point. I will now cite three of these thinkers to give a flavour of how they understand UD. In his book The Language Instinct, Steven Pinker discusses UD:

A rabbit scurries by, and a native shouts, ‘Gavagai!’ What does gavagai mean? Logically speaking, it needn’t be ‘Rabbit’. It could refer to that particular rabbit (Flopsy, for example). It could mean any furry thing, any mammal, or any member of that species of rabbit (say, Oryctolagus), or any other member of that variety of that species (say, chinchilla rabbit). It could mean scurrying rabbit, scurrying thing, rabbit plus the ground it scurries upon, or scurrying in general. It could mean footprint-maker, or habitat for rabbit fleas. It could mean the top half of a rabbit, or rabbit meat on the hoof, or possessor of at least one rabbit’s foot. It could mean anything that is either a rabbit or a Buick. It could mean collection of undetached rabbit parts, or ‘Lo! Rabbithood again’, or ‘It rabbiteth’, analogous to ‘it raineth’. The problem is the same when the child is the linguist and the parents are the natives…Figuring out which word to attach to which concept is the Gavagai problem, and if infants start out with concepts corresponding to the kind of meanings that languages use, the problem is partly solved. (1994, 156)

Pinker then goes on to cite the research of the psychologist Ellen Markman to show that it is indeed the case that children are born with concepts which constrain the types of meanings that words can be given. It is Pinker’s view that Quine’s problem is a problem of UD which is solved by the fact that the concepts that children are born with will place innate constraints on the type of meanings that children can attach to words.

The child psychologist Paul Bloom draws a similar conclusion to Pinker:

These problems of reference and generalization are solved so easily by children and adults that it takes philosophers like Quine and Goodman to even notice that they exist. If we see someone point to a rabbit and say ‘gavagai’, it is entirely natural to assume that this is an act of naming and that the word refers to the rabbit and should be extended to other rabbits. It would be mad to think that the word refers to undetached rabbit parts or rabbits plus the Eiffel Tower. But the naturalness of the rabbit hypothesis and the madness of the alternatives is not a logical necessity; it is instead the result of how the human mind works. (2000, 5)[3].

 

The Harvard linguist Cedric Boeckx echoes the claims of the above thinkers:

Yet, if you think about it, the Gavagai situation is the one we all faced as children trying to acquire the meaning of words. How did we guess that elephant refers to that big grey animal with a long trunk? Because someone pointed to the animal and said elephant? But how did you know what exactly was being pointed at? Surely the finger couldn’t point at the whole elephant; it was your cognitive bias that interpreted the act of pointing in that way. (2010, 41)

There is a lot of debate in contemporary cognitive science as to whether Pinker et al. are justified in postulating innate concepts to overcome UD. However, when it comes to dealing with people who have profound intellectual disabilities and have not learned to speak, we really need to be careful not to impose our own ontology onto them. Their way of interpreting the world might be radically different from ours; we cannot be sure that they share the same innate apparatus as us, so we must proceed with caution.

                                                Part 3: The Web of Belief

The approach to the problems of meaning, belief, and desire that I have outlined is not, I am sure it is clear, meant to throw any light on how in real life we come to understand each other, much less how we understand our first concepts and our first language. I have been engaged in a conceptual exercise aimed at revealing the dependencies among our basic propositional attitudes at a level fundamental  enough to avoid the assumption that we can come to grasp them-or intelligibly attribute them to others-one at a time. My way of performing this exercise has been to show how it is in principle possible to arrive at all of them at once.  (Problems of Rationality, p. 166)

When using Davidson’s views on the theory of interpretation we need to be careful not to confuse Davidson’s project with the one I am interested in. My project is primarily a pragmatic one; I am concerned with coming up with basic principles which are useful in helping to interpret the behaviour of people with profound intellectual disability. Davidson, on the other hand, is concerned with constructing a theory of meaning and interpretation. Because of our different purposes, I use Davidson’s project as needed and set aside what is not useful to my project. This is no criticism of Davidson, as he intended his theories to deal with entirely different problems.

One proviso I should make at the outset is that Davidson limits himself to verbal communication alone. This is because of his belief that we are not justified in attributing propositional attitudes to non-linguistic animals. I will not be following Davidson in this; there is ample evidence from ethology that animals without complex language can think using propositional attitudes.

In his 1987 “A Nice Derangement of Epitaphs” Davidson makes the following claim:

We may say that linguistic ability is the ability to converge on a passing theory from time to time-this is what I have suggested and I have no better proposal. But if we do say this, then we should realise that we have abandoned not only the ordinary notion of a language, but we have erased the boundary between knowing a language and knowing our way around in the world generally. A passing theory is really like a theory at least in this, that it is derived by wit, luck, and wisdom from a private vocabulary and grammar, knowledge of the ways people get their point across, and rules of thumb for figuring out what deviations from the dictionary are most likely. There is no more chance of regularising, or teaching, this process than there is of regularising or teaching the process of creating new theories to cope with the new data in any field-for that is what this process involves. (A Nice Derangement of Epitaphs, p. 107)

I think that he is correct on this point, and I think it contradicts a lot of his other claims about the necessity of language for a creature to have propositional attitudes. On the picture in “A Nice Derangement of Epitaphs”, having a language isn’t a black and white issue; whether a person has a language can be considered a sliding scale. I think this is accurate. Do we say that a person with Broca’s aphasia has a language, or that a person with Wernicke’s aphasia has a language? To a certain degree we can answer yes to these questions, and to a certain degree no. Everyone agrees that a deaf person who uses sign language has a language, but what about a person who can use gestures to communicate their needs but who has not mastered the complex sign language that most of their peers have? Again we can answer both yes and no.

Consider Lamh, a type of sign language used by children and adults with intellectual disabilities. This system uses a few basic signs, some of which can be combined with each other, and these signs help children communicate their basic needs. Now, in an obvious sense, Lamh is not a language. It lacks the recursive ability to combine a finite number of symbols into an infinite series; it lacks the complex grammars typical of the world’s languages. The words used in the system lack the sometimes complex, ambiguous, context-sensitive features of typical language; they lack some of the productive morphological features we find in ordinary words; and they are not really compositional. So there is a clear sense in which we can argue that Lamh is not a language. Lamh clearly isn’t an internal generative procedure which we use to link sound and meaning, so it isn’t a language in the sense in which Chomsky uses the term.

Lamh is, on the other hand, an attempt to communicate using signals. A person using Lamh uses a signal with the intention that the person they are communicating with shares the same meaning of the term, and hence will understand them. A person with profound intellectual disability who cannot use Lamh can still try to communicate by indicating objects in the world and saying yes or no when people interpret them correctly or incorrectly. Likewise, a person who cannot move their body but who can understand what others are saying, and can assent and dissent to what is said, has a language. So my more expansive use of ‘language’ is more intuitive than Chomsky’s, in that it captures all intentional attempts at communication with others. My definition is not really at odds with Chomsky’s; people can define the word ‘language’ as they please, and using ‘language’ in my expansive way makes sense for my project because it nicely captures the various communicative abilities of people with different forms of aphasia[4].

So I think that, given Davidson’s (1987) definition of language, his statement that only to creatures with a language can we sensibly attribute propositional attitudes becomes less controversial. Anybody who engages with the world and with others using intentional signals, in however limited a way, has a language on this Davidsonian model[5].

With the preliminaries out of the way we can now begin to outline the model for interpreting people with profound intellectual disabilities. Davidson notes correctly that when we are trying to understand linguistic utterances we need the locutions of intentional action: belief, desire, meaning, hope, wish, attempt, etc. These intentional locutions are the background conditions needed for speech to occur. For Davidson, meaning, belief, and desire are three interlocking terms, none of which can be reduced to the others. I think that Davidson is correct on this general point. When we try to interpret the behaviour and communicative intent of people speaking to us we must help ourselves to these interlocking concepts.

Davidson grounds his Unified Theory of Thought, Meaning and Action on Ramsey’s decision theory. Ramsey’s Bayesian decision theory relies on two basic intensional notions: belief and preference. This Ramseyan picture tries to make sense of behaviour by assuming that the people being interpreted are rational agents who act in their own best interests. We assume that a person attaches a value to a particular course of action, calculated from the value of the outcome that would be achieved by performing the action and the likelihood that the action will achieve that outcome (a toy version of this calculation is sketched below). We do not have to assume that the subject is consciously working out this formula; we only need to assume that the brain is doing the relevant calculations. Ramsey’s Bayesian decision theory typically operates by getting subjects to gamble on certain outcomes and judging their beliefs and preferences from how they choose. This method is too complex for the task of interpreting the behaviour of people with intellectual disabilities, so I suggest that instead of using gambling we use the simpler Davidsonian method of testing for assent and dissent to various propositions spoken about the external world.
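
As a rough illustration of the Ramsey-style calculation just described, consider the following toy sketch. The actions, probabilities, and utilities are invented, and nothing here is meant as Ramsey’s or Davidson’s own formalism; it only shows the shape of the idea that the value of an action is the value of each possible outcome weighted by the probability that the action produces it, with the agent modelled as choosing the action with the highest expected value.

```python
# A toy sketch of expected-value choice. All actions, probabilities, and
# utilities below are invented for illustration.

def expected_value(action, model):
    """Sum of (probability of outcome * utility of outcome) for an action."""
    return sum(prob * utility for prob, utility in model[action])

# model[action] = list of (probability of outcome, utility of outcome) pairs
model = {
    "reach_for_cup":     [(0.8, 10), (0.2, -1)],  # very likely gets a drink
    "vocalise_to_carer": [(0.5, 10), (0.5, 0)],   # may or may not be understood
}

best_action = max(model, key=lambda a: expected_value(a, model))
print(best_action, expected_value(best_action, model))
# reach_for_cup 7.8   (vocalise_to_carer scores 5.0)
```

When we interpret behaviour we run this reasoning in reverse: from the actions a person actually chooses we infer something about the beliefs (the probabilities) and the desires (the utilities) it makes sense to attribute to them.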

Davidson correctly notes that “A sentence held true plus interpretation is equal to a belief” (ibid, p. 156). So if a rabbit passes by, I say (1) ‘rabbit’, and my client assents, we can conclude that he holds (1) true. To say that a person holds (1) true means that they believe that (1) is true, and if they believe that (1) is true then it follows that they understand the meaning of (1). So here we can see how we can get meaning and belief out of someone assenting to a sentence. However, if I spent the day questioning a client and determined a list of sentences which he held true and a list he held false, I would have a lot of data but very little theory to explain the data. I would be able to derive meaning and belief from evidence about which environmental factors cause the client to hold a sentence true (a minimal sketch of this kind of record-keeping is given below). Obviously this process is still subject to the massive underdetermination we discussed earlier, but we have little choice but to impose our ontology onto the person at the beginning as we begin to interpret which sentences they are assenting to. So, for example, we must assume that they assent to ‘Rabbit’ as in ‘a particular rabbit’, as opposed to ‘undetached rabbit part’, ‘a particular instance of the universal rabbithood’, etc. As we assess how the various sentences they assent to connect to each other we will remove some indeterminacy. However, at the end of any process of interpretation some indeterminacy will remain.
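
Here is a minimal sketch of what that kind of record-keeping might look like; the conditions, sentences, and responses are purely hypothetical. All it does is log which prompted sentences the client assents to under which observed conditions, which is exactly the sort of data that, on its own, leaves interpretation massively underdetermined.

```python
# A minimal log of assent/dissent data. The conditions, sentences, and
# responses are hypothetical examples, not real observations.

from collections import defaultdict

# Each record: (observed condition, sentence prompted, client's response)
observations = [
    ("rabbit in view", "rabbit", "assent"),
    ("rabbit in view", "dog",    "dissent"),
    ("rain outside",   "rain",   "assent"),
]

held_true = defaultdict(list)
for condition, sentence, response in observations:
    if response == "assent":
        held_true[sentence].append(condition)

# Tentative attribution: a sentence held true under a condition is evidence of
# a belief about that condition, pending interpretation of what it means.
for sentence, conditions in held_true.items():
    print(f"Holds '{sentence}' true when: {conditions}")
```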

As we saw above, our process of interpretation will involve some degree of imposing our rationality onto the people we are interpreting. So we should not interpret people as accepting contradictions, or as denying the law of identity, etc. Again, there are instances where people do accept contradictions; for example, in dreams people report accepting contradictions and breaking the law of identity. The psychologist and logician Matte Blanco takes this to indicate that at a deep unconscious level we follow different logical laws. However, outside of these esoteric examples, people typically follow the same logical laws. In any case, when we begin to interpret our clients we have little choice but to assume that they are using the same logical laws as us.

It is best to think of the process of interpreting our clients as uncovering what Quine calls their web of belief. The person we are interpreting has an interconnected set of sentence-like beliefs which are related to each other by logical entailment. Sentences at the interior of the web are the deepest-held truths, the ones we are most wedded to, while sentences at the periphery are more speculative beliefs. Since these sentences are all directly or indirectly connected to each other, changing the truth value of any sentence will have consequences for the rest of the web (a toy model of such a web is sketched below). We assume that the person holds a web of belief which is somewhat like ours at the core, both in terms of ordinary enduring middle-sized objects (trees, mountains) and in terms of the logical laws obeyed. The core of the web of belief can be equated with what Davidson calls a prior theory. Davidson correctly notes that when interpreting the behaviour of another we might not be able to make these principles definite, but we can at least form heuristic rules of thumb.
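
A toy model of a web of belief might look like the following sketch; the beliefs and the links between them are invented for illustration. The only point being made is structural: revising one belief forces a review of everything inferentially connected to it, and beliefs nearer the periphery are cheaper to revise than those at the core.

```python
# A toy web of belief: beliefs are nodes, inferential links are edges, and
# revising a belief forces a review of everything it is linked to. The beliefs
# and links here are invented purely for illustration.

web = {
    "middle-sized objects endure over time":
        {"core": True,  "linked_to": ["the mug is still in the cupboard"]},
    "the mug is still in the cupboard":
        {"core": False, "linked_to": ["tea can be made"]},
    "tea can be made":
        {"core": False, "linked_to": []},
}

def beliefs_to_review(revised_belief, web, seen=None):
    """Collect every belief reachable from the revised belief via its links."""
    if seen is None:
        seen = set()
    for linked in web[revised_belief]["linked_to"]:
        if linked not in seen:
            seen.add(linked)
            beliefs_to_review(linked, web, seen)
    return seen

print(beliefs_to_review("the mug is still in the cupboard", web))
# {'tea can be made'} -- peripheral beliefs are revised before core ones
```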

When trying to explicate another person’s web of belief, an important piece of evidence to be discovered is the degree to which the subject believes that one sentence supports another. So, for example, suppose a subject accepts that sentence (1) ‘The cat is on the mat’ is true. An important step is then to ask the subject to what degree they think (2) ‘The mat is under the cat’ supports (1), or to what degree (3) ‘The cat likes sitting on the mat’ supports (1). By understanding how strongly a subject thinks a series of sentences supports or fails to support another sentence, we can assign numbers and estimate the degree to which the person believes the original sentence to be true (a very rough illustration is sketched below). In the next blog post I will demonstrate how to calculate the degree to which people think one sentence supports another, and how this can be used to help interpret people with profound intellectual disabilities.
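
By way of a very rough illustration (and not the calculation promised for the next post), one could, for example, average the subject’s support judgments to get a crude estimate of their degree of belief in the target sentence. The sentences and the 0 to 1 scores below are hypothetical.

```python
# A crude estimate of degree of belief: average how strongly the subject takes
# each accepted sentence to support the target sentence. The sentences and
# scores are hypothetical, and this is only one simple way to combine them.

target = "The cat is on the mat."

# Subject's judgments, on a 0-1 scale, of how strongly each accepted sentence
# supports the target sentence.
support_judgments = {
    "The mat is under the cat.":         0.9,
    "The cat likes sitting on the mat.": 0.6,
}

degree_of_belief = sum(support_judgments.values()) / len(support_judgments)
print(round(degree_of_belief, 2))  # 0.75
```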

 


[1] My summary of the developmental stages children go through during the first 9 years of their life is taken from Tomasello, Constructing a Language (2003), and Paul Bloom, How Children Learn the Meanings of Words (2002).

[2] See Johnston and Carey (1998) Knowledge and Conceptual Change in Folk-Biology: Evidence from Williams Syndrome.

[3] While Pinker’s and Bloom’s arguments on this narrow point are similar, obviously this does not imply that they hold the exact same view on the nature of the mind.

[4] For a detailed discussion of how Chomsky defines ‘language’ see his Knowledge of Language (1986).

[5] Again Davidson probably wouldn’t agree with me on this point. However, it is not my purpose to defend all aspects of Davidson’s philosophy.