Symington on Psychoanalysis and Intellectual Disability

A BRIEF DISCUSSION OF NEVILLE SYMINGTON’S: THE PSYCHOTHERAPY OF A MENTALLY HANDICAPPED PATIENT

I have recently been reading Neville Symington’s (2007) book ‘Becoming a Person’, which is a collection of papers describing his experiences as a psychoanalyst. In this interesting book he describes his own experiences undergoing psychoanalysis as well as his experiences as an analyst to people with psychosis, people who are psychopaths, and people who are intellectually disabled. The book collects papers written over a period of thirty years which outline how he has developed as a person through different aspects of the analytic experience.

Chapter 8 of the book is a discussion of his experience acting as a therapist to a 33-year-old adult who had an intellectual disability. His client’s name was not given in order to protect his privacy, so a pseudonym was used: Harry Smith. Harry had an IQ of 59. Conventional psychoanalytic wisdom argues that only people who have an IQ of 120 or above can benefit from psychotherapy, so Symington was going against conventional wisdom in taking on Harry.

After reading the case study I decided to write a short blog discussing it. Before proceeding I should note I am not a therapist of any bent. So this blog is the opinion of an interested layman as opposed to a psychiatrist, a psychoanalyst, psychologist or therapist of any kind. I did study psychoanalysis as an undergraduate as part of my philosophy degree. Furthermore my PhD centred on the philosophy of science, mind and language. So I am very familiar with the evidential bases used in all of the main therapies. A few years back I underwent psychoanalysis for a year and a half. So I am familiar with the experience of psychoanalysis from both a theoretical point of view, and from the point of view of an analysand. I have also worked as a care assistant with people who have intellectual disabilities for over ten years. So I am aware from a practical perspective of some of the emotional difficulties which face people with intellectual disabilities throughout their lives. For these reasons I decided to give my perspective on Symington’s discussion of psychotherapy for people with intellectual disabilities.

I found his paper both very impressive and very disappointing at the same time. What I found impressive was his wisdom in helping Harry in the therapeutic setting; however, I found his theoretical discussion seriously defective. I will begin with the positive and discuss what impressed me about the paper.

According to Symington, Harry came from a working-class background. Both of his parents were hard workers and well presented, though the mother was a bit more dishevelled. The father, mother and Harry all lived together in a small flat. However, the parents lived separate lives: the father lived in his own room, which had a television, armchair, bed and all his own possessions. The mother and Harry had their own rooms but spent all of their time together. The father thought that Harry was much smarter than people gave him credit for and wanted Harry to get a full-time paid job, while the mother claimed that she just wanted Harry to be happy. Harry’s father was a former alcoholic, and though he had not drunk for years the mother took a dim view of him, and Harry was aware of this.

When Harry began his therapy he was obsessed with the question of his intelligence level and anxious about the conflicting opinions his family and carers had of his abilities. He talked incessantly in his early sessions and found it difficult to concentrate on what his therapist said to him. His primary aim was to get his therapist to give an assessment of his abilities which he could file alongside the various different assessments he had already received. Symington, instead of offering his own assessment, listened to Harry and tried to get Harry to discover his own abilities.

As therapy went on it became obvious that (1) Harry held his mother as an idealised figure whom he could not think negatively about, (2) his ego was almost merged with her ego, and (3) Harry found it difficult to separate fantasy and jokes from reality. Symington didn’t tell Harry these facts but helped him to work them out himself. After Harry left the therapy, Symington learned the following facts from the social worker: (1) Harry was now unwilling to do any work task at the centre. (2) He stayed at home, but reported to his social worker from time to time and continued to wonder whether he should take up treatment again. (3) It was confirmed that Harry was able to conduct a conversation that would have been quite impossible before the therapy began. He was less pressured by inner anxiety. (4) He was more decisive and had a greater sense of autonomy in relation to life’s choices. (Becoming a Person, p. 137)

Overall, then, the therapeutic intervention was very successful, in that it helped Harry achieve a greater sense of self-understanding and autonomy. Obviously from a single case study it is hard to draw any large-scale generalisations. It seems that Symington’s practical wisdom in refusing to offer Harry yet another assessment to place alongside all of the others was vital to the therapeutic success. Likewise, Symington’s ability to recognise the unconscious idealisation of the mother and Harry’s partial inability to distinguish fantasy from reality was vital. More importantly still, Symington listened to Harry’s stories and helped Harry arrive at an understanding of his situation himself; it was this that gave Harry his greater autonomy. So Symington’s success was a combination of theoretical insight and practical wisdom.

Despite the success of Symington’s analysis of Harry there were aspects of his paper that I was less than happy with. In particular I disagreed with his theoretical explication of therapy for mentally handicapped patients. He explicated developmental delays in people with intellectual disabilities in the following way:

“In the subnormal patient there is an active force blocking the development of intelligence. This means that intelligence presents itself as a threat to the organism.” (ibid, p. 141)

“Now, if there is a regression to the foetal stage in development as Pierce Clark suggests, then to move in a new relation to mother as a result of birth is something powerfully resisted. The hatred of reality and new objects necessarily implies a violent detestation of intelligence. The patient desires to stay in a fixed relation to mother and does not want the emergence of intelligence” (ibid, p. 142)

The above characterisation of intellectual disability is, to put it bluntly, utter nonsense. Firstly, people who have intellectual disabilities do so for a variety of different reasons. If one considers genetic disorders like Down syndrome, Fragile X syndrome, Angelman syndrome, and Williams syndrome, one can see the absurdity of Symington’s characterisation of the cause of low intelligence in intellectual disability. We understand the biochemical reasons for lower intelligence in all of these disorders to an incredibly detailed degree. The same is true of neurodevelopmental disorders which occur as a result of birth trauma, e.g. an umbilical cord getting wrapped around the baby’s neck. If this results in damage to motor control or general intelligence we can trace it to a lack of oxygen in particular areas of the brain. There is simply no need for Symington’s psychological hypothesis in the case of intellectual disability.

Probably the reason Symington was driven to his hypothesis was to explain the change that occurred in his patient, which would not necessarily be predicted based on the idea of fixed brain damage. However, we are learning that the brain is much more flexible and plastic than was previously thought (though it is nowhere near as plastic as some hype would lead one to believe). So Symington’s success can probably be explained in these terms.

In recent times the neuropsychoanalyst Mark Solms has been treating patients who have brain damage. Solms is an expert on brain function as well as on the human element of psychotherapeutic treatment. Symington showed a real intuitive ability to help Harry with his treatment. It is imperative, though, that analysts offering such treatment follow Solms’s example and acquaint themselves with the relevant neuroscientific data to supplement their treatment.

Linguaform Philosophy of Mind: Part 3

In my last blog I considered Quine’s claim that it is necessary for a child to grasp the syntax of quantification before we are justified in attributing an ontology to him. I showed that the experimental work of Soja et al. has refuted this claim. In this blog I will consider the claim that we must make having a language the criterion of being able to think, because if we use the more fundamental criterion that creatures who can reliably distinguish aspects of their environment can think, we will end up attributing thought to thermometers and the like. This argument is supported by, amongst others, Brandom and Davidson. I will argue here that the argument is not compelling, as it relies on the untenable notion of intrinsic intentionality. I will focus my criticisms on Brandom and Davidson, who are the best contemporary philosophers arguing against classification as evidence of thought.

The bogey that Davidson and Brandom are worried about is that by applying the criterion of reliable discriminative capacities as the mark of a thinker we are casting our net too wide. McDowell sums up Brandom and Davidson’s fear as follows:

“On this view, the differential responsiveness to danger manifested in an animal’s capacity for fleeing would not license crediting the animal with being aware of danger, any more than, say, iron’s differential responsiveness to moisture in its surroundings-a disposition to rust if there is moisture and not to rust if there is not-licenses supposing bits of iron can be aware of the presence of moisture” (McDowell: Conceptual Capacities in Perception, p. 133)

This fear of not being able to distinguish between legitimate intentional ascriptions and illegitimate ones is what motivates Davidson and Brandom. The fear is misplaced. From a scientific perspective it is legitimate to use an intentional locution if it gives us predictive control of phenomena that is not otherwise available. In the case of rust we have chemical explanations which can predict and control the phenomenon in ways which go far beyond intentional explanations. However, there is nothing wrong with using intentional explanations as a stopgap. In the case of humans we use intentional explanations in a lot of cases because of the practical difficulties of explaining human behaviour in terms of the design stance or the physical stance. Now, it is an empirical question whether future science will eventually eliminate the use of intentional-level explanations for humans. However, at this stage of our scientific development it is clear that adopting the intentional stance towards humans is still a very fruitful approach. This is the pragmatic approach I adopt, and one can see that it is heavily influenced by Dennett. I think that this approach solves a lot of the problems which face Davidson and Brandom. However, before trying to justify this pragmatic approach I will first outline why Brandom and Davidson do not adopt it.

In his excellent 1994 book ‘Making It Explicit’ Bob Brandom outlined his theory of how thought emerges from implicit social practices which hold a normative force for the society that uses them. Brandom notes that Dennett’s intentional stance approach presupposes that intentionality is conferred on creatures by more capable intentional systems. Such intentional ascription involves concept-using creatures operating in the “space of reasons”, constructing theories to explain the behaviour of entities in their world. Brandom argues that the implicit social practices that govern our linguistic behaviour are where intrinsic intentionality lies. On his view, Dennett’s adoption of the intentional stance itself relies on the notion of intrinsic intentionality.

Davidson also adopts a more realistic approach to intentional ascription than Dennett does. In his 1991 paper ‘Real Patterns’ Dennett diagnoses the main difference between himself and Davidson on this topic as centring on their different interpretations of Quine’s indeterminacy of translation argument. Speaking of competing intentional interpretations, Dennett makes the following point:

“When one wins and the other loses, it will look to the myopic observer as if one “theory” has scored a serious point against the other, but when one recognizes the possibility that both may chalk up victories, which permits either one to improve his theory by making adjustments, one sees that local triumphs may be insufficient to provide any ground in reality for declaring one account a closer approximation to the truth.

Now, some might say that this situation is always unstable; eventually one interpretation is bound to ramify better to new cases, or be deducible from some larger scheme covering other data, etc. That might be true in many cases; but- and this I think is the central point of Quine’s indeterminacy thesis-it need not be true of all” (Real Patterns: p. 304)

Dennett takes the indeterminacy of translation argument as showing that we can have two rival interpretation schemes which are both reliable predictors over the long run but which disagree in crucial respects. He thinks that his interpretation of the indeterminacy of translation has consequences for the degree of realism that is appropriate for one to adopt towards propositional attitude explanations:

 “How does this make me less of a realist than Davidson? I see that there could be two different systems of belief attribution to an individual which differed substantially in what they attributed- even in yielding substantially different predictions of the individual’s future behaviour-and yet where no deeper fact of the matter could establish that one was a description of the individual’s real beliefs and the other not. In other words, there could be two different, but equally real, patterns discernible in the noisy world…the choice of a pattern would indeed be up to the observer, a matter to be decided on idiosyncratic pragmatic grounds” (ibid, p. 305)

Davidson’s view of the indeterminacy of translation is less radical than Dennett’s. When discussing the indeterminacy of translation Davidson explains it by invoking the analogy of two different ways of measuring temperature:

“When we use numbers to keep track of relations among weights and lengths and temperatures, we are not apt to respond that different sets of numbers do as well as others in keeping track of all that is relevant empirically by complaining that weights or lengths or temperatures are not ‘real’. We know that there is no contradiction between saying that the temperature of the air is 32 degrees Fahrenheit and saying that it is 0 degrees Celsius: there is nothing in this ‘relativism’ to show that the properties being measured are not real.” (Davidson: What Is Present to the Mind)

This is a much less radical view than Dennett’s. The different measurement schemes can be translated into each other quite easily. What Dennett is talking about is a situation where two interpretations of a person’s behaviour are radically different and lead to different predictions of the agent’s behaviour. He is talking about a situation where there is no fact of the matter as to which interpretation is the correct one, because a failed prediction does not falsify an entire interpretation scheme.

Since intentional stance ascriptions can give us divergent interpretations which lead to different predictions, and there is no fact of the matter as to which is the correct one, Dennett argues that there is little sense in being a realist about such ascriptions. He thinks that in one sense we can argue that intentional ascriptions pick out real patterns; however, there will be divergent real patterns in the environment which can be picked out by the agents doing the interpreting, and there will be no fact of the matter as to which interpretation is correct.

Davidson does not think that Dennett has given us good reason for adopting his instrumentalist approach to intentional ascription. As he sees it, Dennett has confused two issues: (1) whether propositional attitudes are entities, and (2) whether there is a correct answer to the question of whether someone has a certain propositional attitude (Indeterminism and Anti-Realism, p. 82). To question (1) Davidson answers no. He notes that we don’t need to postulate beliefs as entities in order for them to be true of a person, any more than we need to postulate an entity ‘weight’ that a person has in order for it to be true that a person weighs a certain amount. On this point Dennett and Davidson are largely in agreement.

On point 2 there are substantial disagreements between the two thinkers. Davidson doesn’t agree with Dennett’s claim that there is not a correct answer to the question of whether a certain propositional attitude is true of a person. Davidson makes the point that it is only when one is within language that truth and error come fully onto the scene. On Davidson’s conception language necessarily involves a self, (at least) another person and a shared object of experience which they are making claims about. For Davidson, the concept of objectivity is constituted by these social reactions to shared objects in the external environment.

If, for example, I am watching a person looking at the movements of an animal, and he reacts in a similar way to the way I would, and likewise he is watching my reactions to the animal, then we are each judging the other to be a creature with a similar orientation to the world. We can make judgements about what we think he SHOULD do in certain circumstances, for example, hide when he sees a lion. Our concepts of right and wrong will co-evolve as we interpret each other as acting, or making claims about the world, in ways we judge to be inconsistent with what the other believes about the world. So if we believe that the other thinks lions are dangerous, and the other doesn’t hide when a lion is present, we might judge that the person WRONGLY thinks that the lion cannot see him. So, for Davidson, since our concepts of right and wrong, which rely on the notion of objective truth and falsity, are so intimately connected with our propositional attitude ascriptions, he argues that we must conclude that if anything is real our propositional attitude ascriptions are. This is because the very notion of objectivity is parasitic on those intentional ascriptions.

Both Davidson and Brandom argue in similar ways that Dennett’s intentional stance approach does not work because we have objective grounds for attributing intentionality to members of our linguistic community. So if we do adopt Dennett’s intentional stance approach to non-linguistic creatures as a sort of pragmatic measure, then we need to recognise that these creatures, unlike us, do not have real intentionality.

Overall I am not very impressed with Davidson or Brandom’s argument for intrinsic intentionality. Both of them argue that intentional ascription is necessarily a social intersubjective phenomenon, and that intentional ascription to non-linguistic creatures is reliant on language using theorisers.

I agree that as we evolve our sense of right and wrong, true and false, it is probably the case that we need to do so in a social environment with other creatures whom we take to be believers. However, I don’t think that this fact is any reason to be a realist, in the sense Davidson is, about belief/desire psychology. Davidson takes the fact that intersubjective linguistic judgements are necessary to develop our sense of right and wrong to argue that if anything is objectively real it is our propositional attitudes. I see no reason to draw this conclusion. The fact that folk psychology may have played a key role in developing our sense of right and wrong does not vindicate folk psychology, and does not show that it is real in Davidson’s sense. If, as we learn more and more about neuroscience, we develop more accurate ways of predictively understanding and controlling the behaviour of people, then folk psychology will have been supplanted as a theory. Until such time (if it ever comes) we are justified in using the intentional stance as a stopgap measure for explaining and predicting the behaviour of linguistic and non-linguistic agents in the world. Nothing Davidson or Brandom have said really casts any doubt on the pragmatic approach I am recommending here. Given this, I think that their assertion that non-linguistic creatures do not have propositional attitudes is unmotivated. Better simply to adopt the pragmatic approach that it is useful to conceive of non-linguistic creatures, as well as linguistic creatures, as intentional agents, an approach justified on explanatory and predictive grounds.

In my next blog I will consider Davidson’s argument that the logic of belief/desire ascriptions, their referential opacity, which does not apply to non-linguistic creatures, shows that such creatures do not think. I will demonstrate that this argument does not work. In my final blog on this topic I will critique Dennett’s linguaform conception of consciousness and show why it does not work.

Chomsky, Russell and Platonic Propositions

Chomsky’s claim that humans are born with an innate mental lexicon which as a matter of empirical fact yields analytic connections seems to commit him to the existence of propositions. In this blog I will evaluate the claim that Chomsky is committed to the existence of propositions. I will argue that though his linguistic theory does not commit him to the existence of Platonic propositions, he does nonetheless endorse the existence of a certain type of proposition which Quine would find objectionable.

I will begin by outlining the notion of a Platonic proposition, which was endorsed at the turn of the twentieth century by Bertrand Russell. Russell first conceived of propositions as the non-linguistic entities that sentences are about. So, for example, if a person says in English ‘The Twin Towers were bombed on Sept 11th 2001’, and another person says the same thing in French, they are both asserting the same true proposition. Further examples make the point clear: if on September 10th 2001 Osama Bin Laden said ‘The Twin Towers will be bombed on September the 11th 2001’, and I say today ‘The Twin Towers were bombed on September the 11th 2001’, we are taking different attitudes towards the same proposition. Propositions, like sentences, have constituents which are related to each other in certain ways. According to Russell circa 1903, the constituents of propositions do not designate the constituents of states of affairs; rather, they encompass them. The proposition ‘Socrates is in the field’ contains Socrates, the relation ‘is in’, and the field. It is important to note that the objects that enter into a proposition are neither true nor false; it is only the proposition which is true or false. According to the early Russell, in order to grasp a proposition one must be acquainted with its constituents.

The two conditions which Russell put on something’s being a proposition are that (1) the structure of the sentence must mirror the structure of the proposition, and (2) we must be acquainted with each of the constituents of the proposition. There are obvious difficulties with condition (2). Consider propositions such as ‘all men are mortal’ or ‘all prime numbers have a successor’: we can grasp the truth of these propositions, yet we are not acquainted with ‘all men’ or ‘all prime numbers’. To meet this difficulty Russell postulated entities called Denoting Concepts: we could be acquainted with these concepts, and they could denote entities which fell beyond our ken. The price Russell had to pay for the postulation of Denoting Concepts was that it complicated his theory of propositions, which now consisted of entities we are acquainted with (objects) and Denoting Concepts, which we are also acquainted with and which refer to the entities we are not directly acquainted with, such as ‘all men’. As is well known, in his famous Gray’s Elegy argument Russell showed that the postulation of Denoting Concepts led to more trouble than it was worth. Having cleaned up this mess, Russell held that we are not acquainted with Denoting Concepts; rather, a proposition is a complex of objects arranged in a certain relation. He showed that the grammatical form of a sentence can mislead us and hide its logical form. Consider the sentence ‘Socrates is mortal’. From a grammatical point of view this sentence would seem to have two constituents, the subject ‘Socrates’ and the predicate ‘is mortal’; on Russell’s analysis, however, its logical form is quantificational: a variable bound by a quantifier satisfies the predicate. This method of analysis has the advantage that it helps us assign truth or falsehood to sentences containing names that don’t refer, by analysing those names as definite descriptions.
So for Russell the logical form of a sentence represents the unambiguous proposition which the sentence is ambiguously trying to assert. This picture gives philosophy the job of discovering which sentences express true propositions and which do not, prompting the early Wittgenstein to state that ‘all philosophy is a critique of language’.
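The quantificational analysis can be written out explicitly. The following is a standard textbook reconstruction in modern notation (not Russell’s own 1905 symbolism) of his analysis of a sentence of the form ‘The F is G’:

```latex
% "The F is G" asserts: there is exactly one F, and it is G.
\exists x \bigl( F(x) \land \forall y \,( F(y) \rightarrow y = x ) \land G(x) \bigr)
% Applied to a non-referring description such as "the present King of
% France", the first conjunct fails, so the whole sentence comes out
% false rather than lacking a truth value.
```

Note how the grammatical subject ‘the F’ disappears under analysis: nothing in the formula corresponds to it as a unit, which is exactly Russell’s point that grammatical form hides logical form.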

Now, Quine famously claimed that one could benefit from Russell’s logic and explain truth and falsehood (along Tarski’s lines) without admitting things such as propositions into our ontology. His indeterminacy of translation argument was explicitly designed to show that propositions are non-explanatory and have poor identity conditions, and hence should not be admitted into our ontology.

When analysing whether Chomsky is committed to the existence of propositions it is important to note that Chomsky’s primary concern is not with the epistemological issues which interested Russell. Furthermore, he is not interested in mapping word-world relations; on his view language can only fruitfully be understood if a certain amount of idealisation occurs and the domain of research is limited. For example, a person interested in studying the structure of the foot will not exhaustively try to map every interaction between the foot and the world; rather, they will treat the foot as a biological aspect of the person and proceed to investigate its internal structure. For Chomsky, language is just a biological aspect of the human species like any other, and hence should be studied in the same manner as every other such aspect. In order to study language we need to determine what its main features are, and then construct an abstract model to explain how those features are possible. The central features of language are, first, that it is compositional: the meaning of a sentence is determined by the meanings of its constituents and the rules used to combine them. Second, language involves the ability to express an infinite number of utterances using finite means. Third, humans born into any normal human environment will begin to speak at predictable times, and require very little explicit instruction to do so; the converse is true of other primates, who even with explicit teaching can learn only extremely rudimentary linguistic abilities. Fourth, humans brought up in different environments seem to speak different languages.
These are the basic facts of language that Chomsky sets out to explain in his theory of generative grammar, and Chomsky nowhere claims that Platonic propositions are helpful in addressing these questions; in fact he has in numerous places criticised people such as Katz and Postal for holding that language is an abstract object.

Before considering Chomsky’s position I will first consider a possible way that Platonic propositions could be used to explain the above four facts about language. The compositionality requirement could be explained by the fact that the grammatical form of sentences mirrors the logical form of propositions. So consider the sentence ‘Socrates is bald’: this sentence could mirror a proposition which contained the particular Socrates and two universals, the relation ‘is’ and the attribute ‘bald’. Or the sentence could consist of the particular (the subject Socrates) and a single universal (the predicate ‘is bald’). The two seemingly contradictory facts that humans seem able to speak without explicit instruction, and yet when born in different environments speak different languages, can also be handled by postulating propositions. When an English speaker says ‘Socrates is bald’, and a Frenchman speaks its equivalent, they are both expressing the same proposition but using different sounds to do so. And the Platonic doctrine that we are born with innate knowledge could be used to explain why we do not need explicit instruction to speak natural languages: humans, it could be claimed, are born with rational intuition and hence learn how to discover propositions in the world and represent them using arbitrary symbols. All that now needs to be explained is our ability to use finite means to express infinite ideas, and this ability is the real sticking point. When Russell wanted to explain our ability to grasp propositions such as ‘all prime numbers have a successor’ he was led to postulate Denoting Concepts with which we were acquainted, and which denoted the said numbers. So instead of being acquainted with an infinite number of propositions about prime numbers we are acquainted with the Denoting Concepts.
An obvious difficulty with this explanation is that if one accepts it, one can no longer accept the earlier view that grammatical sentences mirror the logical structure of propositions. One is left with a logical form which is hidden by the grammatical form of the sentence, and this being the case, one no longer has an explanation of the compositionality of sentences.

We need to explain why sentences are structured the way they are. If they don’t mirror the logical form of propositions, one could claim that we adopt the structures we do because they are arbitrary conventions, and that these conventions as a matter of contingent fact do not mirror the logical structure of propositions. However, if one accepts this conventionalist view one is committed to the view that language is learned, and one must then be able to sketch a plausible learning-theory account of language acquisition. The above discussion indicates that Platonic propositions as explanatory tools cannot explain all of the important facts about language, though they do seem to offer some kind of explanation of at least some of them. Furthermore, as the later Wittgenstein recognised, not all linguistic usage involves expressing propositions; he used the metaphor of language being a tool rather than a picture of reality, and for the most part he was correct. The fact is that linguistic usage is inherently novel, and is not constrained merely to the picturing of facts. So much more than Platonic propositions is needed to explain linguistic ability.

Chomsky’s explanation of the four core features of language involves a shift towards internalistic explanation rather than the externalistic explanations which are common in philosophy. Firstly, he agreed with Plato that to explain our linguistic ability one needs to postulate that the person comes to the learning situation with innate apparatus. If people are all born with the same innate capacity for language, then a person born in France and a person born in England could both be saying the exact same thing while sounding different, not because they both grasp the same proposition which they express with different sounds, but because they are speaking sentences which are structured differently owing to parametric variations, which are fixed depending on the type of data one is exposed to in one’s childhood environment. The fact that we can express infinite ideas using finite means can be explained by making the rules which underlie this recursion explicit. Likewise the fact of compositionality can be explained in terms of innate features of the lexicon.

Chomsky argues for a language faculty with his poverty-of-stimulus argument; I have criticised this argument in an earlier blog and will not repeat the material here. I will now describe the nature of the language faculty Chomsky postulates, and demonstrate that it in no way relies on the notion of a Platonic proposition. Chomsky argues that this innate apparatus is genetically programmed and wired into our brains, and that it determines how we speak and understand language. Two of the most obvious features of language, which all linguists recognise, are, first, that we can understand and speak an unbounded number of sentences despite having only finite resources available in our brains, and, second, that sentences have a fixed structure: words can only go together in a certain order and still be grammatical. Chomsky tried to capture these features in his pioneering work Syntactic Structures, where he made these facts explicit. His aim was to enumerate certain principles, predict how certain sentences should go together, and test sentences to see whether these predictions are accurate. According to Chomsky, people’s grammatical intuitions are the raw data we have to work with, and the principles-and-parameters approach is the explanatory theory we construct to explain the data. Chomsky’s first insight into language is that it is not just a string of words which we learn inductively; rather we automatically group words into what he calls phrases. He argues for this position by constructing mathematical models of how language could be learned as a string of words inductively, and then proving that such models cannot capture natural language[1]. He called the model a Finite State Grammar. A Finite State Grammar consists of the following five features: (1) an initial state;
(2) a finite number of states; (3) a specification of transitions from one state to another; (4) a specification of a symbol that will be printed when a particular transition obtains; (5) a final state (Lasnik, SSR, p. 12). In Syntactic Structures Chomsky constructed a finite state grammar consisting of five states. In the initial state the only option for the machine is to proceed to state (2), and when this is done the machine must print the symbol ‘The’. From state (2) the machine has two options, go to state (3) or state (5): if it goes to state (3), ‘man’ is printed; if it goes to state (5), ‘men’ is printed. Once the machine has taken the option of either (3) or (5) its next move is determined: (3) must go to state (4), at which point ‘comes’ is printed, and (5) must go to state (4), at which point ‘come’ is printed. So there are two possible sentences that the finite state grammar will allow: (A) The man comes. (B) The men come. The question is: can such a machine be modified to make it capable of capturing infinity?

It is easy to extend such a machine so that it can capture infinity. At state (2) a third option can be added: the machine can go from (2) to (3), from (2) to (5), or loop back on itself from (2) to (2), with the rule that every time (2) loops back on itself the symbol ‘old’ is printed. The finite state grammar now allows for infinitely many constructions: it can print (A) The man comes, (C) The old man comes, (D) The old old man comes, and so on to infinity. While such a device is obviously useful, and can capture an important feature of language (the ability to construct an unbounded number of sentences by finite means), it has an obvious weakness: it cannot capture certain basic grammatical features of ordinary language, such as embedding and cross-serial dependencies.
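The five-state machine and its ‘old’ loop can be sketched directly. This is a minimal simulation for illustration only; the state numbering follows the text.

```python
# A minimal simulation of the finite-state grammar described above,
# including the loop on state 2 that prints "old", yielding
# "The (old)* man comes" / "The (old)* men come".
TRANSITIONS = {
    1: [(2, "The")],
    2: [(2, "old"), (3, "man"), (5, "men")],
    3: [(4, "comes")],
    5: [(4, "come")],
}
FINAL = 4

def generate(path):
    """Emit the sentence produced by following a sequence of states."""
    state, words = 1, []
    for nxt in path:
        options = dict(TRANSITIONS[state])
        if nxt not in options:
            raise ValueError(f"no transition {state} -> {nxt}")
        words.append(options[nxt])
        state = nxt
    assert state == FINAL, "machine must halt in the final state"
    return " ".join(words)

print(generate([2, 3, 4]))        # The man comes
print(generate([2, 2, 2, 5, 4]))  # The old old men come
```

Because the loop on state 2 can be taken any number of times, the machine generates unboundedly many sentences from finite means, which is exactly the point of the example.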

Consider[2] the phrase ‘an anti-missile missile’, that is, a missile used to defend against missile attacks. Presumably, if I knew that my enemies had an anti-missile missile, I would try to develop an anti-anti-missile-missile missile. As Lasnik put it, ‘There are presumably technological limits on how far one can go in creating this kind of weapon, but there are no linguistic limits as far as the corresponding word formation process is concerned. Let’s notate the situation as follows: (45) antiⁿ missileⁿ⁺¹’ (Lasnik, p. 15). The important point to note is that a finite state grammar cannot capture this basic grammatical feature of language. If one constructed a grammar with a loop on ‘anti’ and another on ‘missile’, one could produce as many ‘anti’s or as many ‘missile’s as one wanted, but one could not correlate the two in the way indicated by (45) without adding abstract phrase structure. In order to explain this feature of language we must assume that words are grouped together by abstract phrase structure rules which determine how they can combine.

As I have already said, finite state grammars have no way of keeping track of how many ‘anti’s and how many ‘missile’s there are in the sentence. Context-free phrase structure grammars can perform this feat, and they do so by introducing the two elements at the same time. Consider the grammar with (a) Ʃ: S and (b) F: S→aSb. The designated initial symbol S is what is known as an abstract non-terminal symbol: it will not be part of the sentence, so it will have to be rewritten as a sequence of terminal symbols. As can be seen from (b), we are to rewrite S as aSb. From (a) and (b) we construct the following derivation, which we will call Grammar X:

Grammar X:

Step 1: S (following a)

Step 2: aSb (following b)

Step 3: aaSbb (reapplying b to step 2)

Step 4: aaaSbbb (reapplying b to step 3)

Obviously we can carry on this sequence to infinity, and using this process we can keep track of cross-serial dependencies. The important point to note is that it is the abstract structure that makes it possible for the numbers of a’s and b’s to be correlated. The phrase structure grammar above is obviously very different from a natural language, so I will now use the same technique to construct a model which is a bit closer to natural language.
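Grammar X can be simulated in a few lines. Since the fragment in the text leaves the terminating rewrite implicit, I assume an additional rule S → ab to discharge the non-terminal at the final step; that rule is my own addition.

```python
# A simulation of Grammar X: starting from the designated symbol S and
# reapplying S -> aSb, the numbers of a's and b's stay correlated no
# matter how long the derivation runs. The terminating rule S -> ab is
# an assumed addition; the fragment in the text leaves it implicit.
def derive(steps):
    string = "S"
    for _ in range(steps):
        string = string.replace("S", "aSb")   # rule (b)
    return string.replace("S", "ab")          # assumed terminating rule

for n in range(3):
    print(derive(n))   # ab, aabb, aaabbb
```

However many times rule (b) is applied, each ‘a’ is matched by a ‘b’, which is precisely what no finite state grammar can guarantee.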

So let us begin the way we did above by stating what our initial symbol is and then outlining our rewrite rules. (1) Ʃ: S (2) S→NP VP

NP→N

VP→V

N→Arsene

N→Alex

V→Thinks

V→Laughs

V→Cries

Using the above rules one can construct sentences by means of what is known as a derivation. In a derivation one tries to rewrite away all of the non-terminal symbols. So our derivation goes as follows:

Step 1: S

Step 2: NP VP (S→NP VP)

Step 3: N VP (NP→N)

Step 4: Arsene VP (N→Arsene)

Step 5: Arsene V (VP→V)

Step 6: Arsene laughs (V→Laughs)

STOP

As Cedric Boeckx (Bare Syntax, 2008) correctly notes, the above grammar cannot capture infinity; however, if we want to capture infinity all we need to do is introduce a rule that reintroduces S, for example VP→V S. With this proviso in place we can capture sentences such as ‘Arsene thinks Alex laughs’. So our artificial model has moved beyond the limits of a finite state grammar and can capture things such as cross-serial dependencies. However, there are some aspects of grammar which phrase structure grammars cannot capture, such as unbounded cross-serial dependencies. In order to show what these unbounded cross-serial dependencies are, and how Chomsky accounts for them, I will again give concrete examples from ordinary language.
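The grammar with its recursive rule can be turned into a toy generator. This is only a sketch: it assumes the recursive rule takes the form VP → V S (so that the verb survives into the embedded clause), and the depth cap is my own addition to keep the recursion finite.

```python
import random

# A small generator for the phrase-structure grammar sketched above,
# with a recursive rule VP -> V S so that sentences can embed
# sentences, e.g. "Arsene thinks Alex laughs". The depth cap is an
# added assumption to keep derivations finite; the grammar itself
# allows unboundedly deep embedding.
RULES = {
    "S": [["NP", "VP"]],
    "NP": [["N"]],
    "VP": [["V"], ["V", "S"]],
    "N": [["Arsene"], ["Alex"]],
    "V": [["thinks"], ["laughs"], ["cries"]],
}

def expand(symbol, depth=0, max_depth=3):
    if symbol not in RULES:            # terminal symbol: emit it
        return [symbol]
    options = RULES[symbol]
    if depth >= max_depth:             # forbid further embedding
        options = [o for o in options if "S" not in o] or options
    rule = random.choice(options)
    out = []
    for sym in rule:
        out.extend(expand(sym, depth + (sym == "S"), max_depth))
    return out

random.seed(0)
print(" ".join(expand("S")))
```

Like all toy grammars, this one overgenerates (nothing stops ‘laughs’ taking an embedded clause), but it shows how a single recursive rule buys the unbounded expressive capacity a finite state grammar lacks.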

In English there are three different types of auxiliaries: (1) modal auxiliaries (can, must, may, will, etc.); (2) HAVE (have, had, etc.); (3) BE (am, is, was, were). By examining how these auxiliaries can combine with other words, and with each other, while the sentence remains grammatical according to the intuitions of ordinary English speakers, Chomsky uncovered a series of generalisations: he discovered a series of facts about the behaviour of English auxiliaries and then formulated general laws about these linguistic regularities. All English sentences have a main verb and a noun, as in sentence (1) Arsene laughed. Some English sentences, in addition to having main verbs, have what are known as auxiliary verbs. Consider sentence (2) Arsene may laugh: as well as a main verb it contains the modal auxiliary ‘may’. A sentence can also contain more than one auxiliary: (3) Arsene may have laughed contains two auxiliary verbs, a main verb and a noun. A sentence can even contain three auxiliaries: (4) Arsene may have been laughing. What Chomsky discovered was that there are regularities which govern how these auxiliaries behave.

When the main verb is combined with certain auxiliaries its form remains unchanged; when it is combined with certain other types of auxiliaries its form is altered. Consider the following constructions (taken from Lasnik, p. 36):

(X) (a) Arsene may laugh.

(b) Arsene will laugh

(c) Arsene can laugh

(d) Arsene could laugh

(Y)  (a) Arsene has laughed

(b) Arsene is laughing

(Z)   (a) Arsene had laughed

(b) Arsene was singing

Obviously the modal auxiliaries (henceforth M) do not modify the main verb, whereas the other auxiliaries do modify it. So from the behaviour of auxiliaries Chomsky formulated generalisations such as the following. Generalisation 1: when a sentence contains a modal auxiliary (M), it is always the first thing after the subject. Generalisation 2: when have and be both occur, be immediately follows have.

Now we can take a further look at examples like the above and see if we can find further generalisations. Consider, for example, sentences with no auxiliary verbs. (Q) (a) John owns a house (present)

(b) John owned a house (past)

(c) *John own a house (bare)

(d)* John owning a house (progressive)

(e)* John owned a house (perfect)

From the above data we can derive Generalisation 3: if the main verb is the first verb-like thing in the sentence, then it can appear in the ‘present’ or ‘past’ form but not in the ‘bare’, ‘progressive’ or ‘perfect’ form (Lasnik, p. 38). Chomsky discovered that generalisation 3 also holds for all verb-like things, such as modal auxiliaries, have and be. From this fact he abstracted to Generalisation 4: whatever verb-like thing is first in the sentence will appear in the present or past form. From here Chomsky examined whether he could find generalisations for the second verb-like thing in a sentence, and for the third. It is the generalisation he discovered for the third verb-like thing which concerns us here. Generalisation 5: whenever BE occurs, the next verb-like thing appears in the ‘progressive’ form. So when a sentence has BE in it, a later verb carries -ing. As Lasnik put it, be and ing go together, yet they never appear on the same word.

The above point can be illustrated with a concrete example. Take the sentence (1) Arsene has been laughing. As a direct result of the BE auxiliary being used, -ing gets affixed to the verb laugh, and this is a concrete example of an unbounded cross-serial dependency. Given that phrase structure grammars cannot capture these unbounded cross-serial dependencies in a non-ad-hoc manner, we need a different device to capture this fact about language, and the device Chomsky appeals to is what he calls transformations. It is at this point that Chomsky introduces his division between deep structure and surface structure: at the level of deep structure a sentence like (1) has the form Arsene (has en) (be ing) laugh. It is through applying transformations that the sentence is translated into the surface form Arsene has been laughing. It should be patently obvious that this deep/surface structure distinction has nothing in common with the distinction between logical form and grammatical form.
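The idea behind the transformation can be sketched roughly: in deep structure each affix (en, ing) sits bracketed with the auxiliary that selects it, and the rule hops the affix onto the next verbal element. The small irregular-form table below is my own illustrative stand-in, not Chomsky’s actual rule system.

```python
# A rough sketch of the affix-hopping idea: each affix starts out with
# the auxiliary that selects it; the transformation hops it onto the
# NEXT verbal element to the right. The form table is an illustrative
# stand-in for English morphology.
AFFIXED = {
    ("be", "en"): "been",
    ("laugh", "ing"): "laughing",
    ("sing", "ing"): "singing",
}

def transform(deep):
    """deep: list of verbal elements, each a ('stem', affix-or-None) pair."""
    surface, pending = [], None
    for stem, affix in deep:
        form = AFFIXED.get((stem, pending), stem) if pending else stem
        surface.append(form)
        pending = affix            # this element's affix hops rightwards
    return " ".join(surface)

print(transform([("has", "en"), ("be", "ing"), ("laugh", None)]))
# -> has been laughing
```

The point of the sketch is that the dependency between be and -ing is stated once, in deep structure, and the surface discontinuity falls out of the hopping rule.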

The grammatical rules and generalisations which Chomsky discovered led him to the conclusion that as we learn new words we automatically group them into phrases such as noun phrases, verb phrases, prepositional phrases and adjective phrases. A noun phrase is any phrase that is about the noun, such as the phrase ‘the logical philosopher’. A noun phrase can be broken down into the following structure: NP→(DET) A* N, which means that a noun phrase consists of an optional determiner, any number of adjectives, and a noun. Further definitions are VP→V NP, and S→NP VP. So a sentence can be broken down into various phrases, which can be further broken down into words operating according to certain rules. Chomsky made a striking discovery when analysing these rules: the various phrases all turned out to have the same structure.

Steven Pinker summarised the rules that all phrases share in his book The Language Instinct, claiming that there are four such rules: (1) all phrases have a head; (2) all phrases have some role-players which are grouped with the head inside a subphrase; (3) all phrases have modifiers which appear outside the subphrase; (4) all phrases have a subject (The Language Instinct, p. 110). I will explain these four rules in more detail before discussing their significance.

The first rule, that all phrases have a head, is in a sense obvious; it simply means that the phrase is about the head. So the NP ‘the logical philosopher’ is not about the determiner ‘the’, nor about the adjective ‘logical’; it is about the noun ‘philosopher’, which is the head of the phrase. To understand the second rule consider the sentence ‘Dave drank a pint of vodka’. The sentence consists of a subject (Dave) who plays the role of agent, and an object (the vodka) which is a patient. The vodka plays the role of patient because something is being done to it, while the subject is the agent because it is doing something to the object. More complicated sentences can be broken down into the thematic roles of agent, patient and recipient, e.g. ‘Van Persie passed the ball to Ramsey’: Van Persie (agent), the ball (patient), Ramsey (recipient). The sentence can be thought of in terms similar to Frege’s analysis into function and argument: the preceding sentence consists of the three-place predicate ‘x passed y to z’, an incomplete function which becomes complete when appropriate names are mapped onto the variables. The variables in this case are filled by the three role-players: the agent, the patient and the recipient. The roles the various names play are determined by the lexical information stored in the brain about the predicate; for example, ‘passed’ requires an agent, a patient and a recipient, whereas ‘drank’ requires just an agent and a patient. These role-players are grouped with the head inside a subphrase called an N-bar or V-bar.
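The idea that each predicate’s lexical entry specifies the roles it demands can be sketched as a toy lexicon. The frames and the checking function below are purely illustrative, not a claim about how the lexicon is actually stored.

```python
# A toy lexicon pairing verbs with the thematic roles they demand, as
# in the discussion above: "passed" needs agent, patient and recipient,
# while "drank" needs just agent and patient.
ROLE_FRAMES = {
    "passed": ("agent", "patient", "recipient"),
    "drank": ("agent", "patient"),
}

def saturate(verb, **roles):
    """Check that the supplied role-players exactly fill the verb's frame."""
    frame = ROLE_FRAMES[verb]
    missing = [r for r in frame if r not in roles]
    extra = [r for r in roles if r not in frame]
    if missing or extra:
        raise ValueError(f"{verb}: missing {missing}, extra {extra}")
    return {role: roles[role] for role in frame}

print(saturate("passed", agent="Van Persie", patient="the ball",
               recipient="Ramsey"))
```

On this picture, an incomplete predicate is exactly a function awaiting its arguments, in the Fregean sense the text describes.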

The third rule is that all phrases have modifiers which sit outside the subphrase. Consider the PP ‘from France’: this prepositional phrase is what is known as a modifier. The phrase ‘the captain of Arsenal from France’ can be schematised as (NP (Det the) (N′ (N captain) (PP of Arsenal)) (PP from France)). The PP ‘of Arsenal’ belongs with the noun ‘captain’: to be a captain you have to be a captain of something, so they are intrinsically connected. The modifier ‘from France’, however, is not intrinsically connected to ‘the captain’, so it is not grouped in the same subphrase; while it is still part of the NP, it sits on another branch. The fourth rule is that subjects are given a special position in phrases: the subject is usually a causal agent, and is represented as Spec.

As Pinker noted, what is interesting about these rules is that all phrases share them, whether NP, VP, AP or PP, so the rules by which all of these phrases are governed must be an abstract set of principles. The principles are represented by an abstract schema called X-bar theory, XP→(SPEC) X′ YP*, which states that ‘a phrase consists of an optional subject, followed by an X-bar, followed by any number of modifiers’. It is these abstract principles which govern how words are grouped in sentences.
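The X-bar schema can be rendered as a small data structure. This is a rough sketch of the XP → (SPEC) X′ YP* template applied to the earlier ‘captain’ example, not a serious parser; the class names are mine.

```python
from dataclasses import dataclass, field

# A minimal rendering of the X-bar schema XP -> (SPEC) X' YP*: every
# phrase has an optional specifier, a bar level grouping the head with
# its role-players (complements), and any number of modifiers outside
# that bar level.
@dataclass
class XBar:
    head: str                                    # the X the phrase is "about"
    complements: list = field(default_factory=list)

@dataclass
class XP:
    bar: XBar
    spec: str = ""                               # optional subject/specifier
    modifiers: list = field(default_factory=list)

    def words(self):
        out = [self.spec] if self.spec else []
        out.append(self.bar.head)
        out.extend(self.bar.complements)
        out.extend(self.modifiers)
        return " ".join(out)

np_phrase = XP(bar=XBar("captain", ["of Arsenal"]),
               spec="The", modifiers=["from France"])
print(np_phrase.words())  # The captain of Arsenal from France
```

Because the same skeleton serves for NP, VP, AP and PP alike, the schema only has to be stated once, which is Pinker’s point about the abstractness of the principles.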

It may seem that the positing of underlying principles governing our language use is contradicted by the fact that people brought up in different countries speak different languages; Chomsky accounts for this fact with his discussion of parameters. Adriana Belletti and Luigi Rizzi describe the nature of a parameter in the following manner: ‘The child interprets the incoming linguistic data through the analytic devices provided by Universal Grammar, and fixes the parameters of the system on the basis of the analysed data, his linguistic experience. Acquiring a language thus means selecting, among the options generated by the mind, those which match experience, and discarding other options’ (Language and Mind, p. 17). A simple example of a parameter is what is known as the movement parameter. Consider question formation: when forming questions, human languages have two options to choose from. The first is to move the interrogative phrase (who, etc.) to the front, to a position in the left periphery of the clause; the second is to leave the interrogative phrase in the clause-internal argument position in which it is interpreted (ibid., p. 17). English takes the first route (Who did you meet?), Chinese the second (You love who?), while French allows both options. No known language violates this parameter, so one can reasonably assume that it is an aspect of universal grammar. This, in short, is a non-technical explanation of how Chomsky believes a child goes from meagre input to torrential output (optimal language expression).
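The movement parameter can be caricatured in a few lines. The function below is a toy illustration of the two settings; its handling of do-support and word forms is a gloss of my own, not an analysis.

```python
# A toy rendering of the movement parameter: the same underlying
# question is spelled out either with the wh-phrase fronted to the left
# periphery (the English setting) or left in its clause-internal
# argument position (the Chinese setting). Do-support and tense are
# simplified glosses, not part of the parameter itself.
def form_question(subject, verb, wh, fronting):
    if fronting:
        # wh-phrase moves to the left periphery of the clause
        return f"{wh.capitalize()} did {subject} {verb}?"
    # wh-phrase stays in situ, where it is interpreted
    return f"{subject} {verb} {wh}?"

print(form_question("you", "meet", "who", fronting=True))   # Who did you meet?
print(form_question("you", "love", "who", fronting=False))  # you love who?
```

On the Belletti and Rizzi picture, acquisition amounts to fixing the boolean from the data the child hears, rather than learning question formation from scratch.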

The important point to note is that nowhere in the brief sketch above is the notion of a Platonic proposition needed. So Chomsky can be cleared of the charge of postulating Platonic propositions of a sort Quine would find objectionable. However, Quine is not concerned only with propositions as abstract entities, as the following quote shows:

‘My objection to recognising propositions does not arise primarily from philosophical parsimony - from a desire to dream of no more things in heaven and earth than need be. Nor does it arise, more specifically, from particularism - from a disapproval of intangible or abstract entities. My objection is more urgent. If there were propositions, they would induce a relation of synonymy or equivalence between sentences themselves: those sentences would be equivalent which expressed the same proposition. Now my objection would be that the appropriate equivalence relation makes no objective sense at the level of sentences’ (Quine, The Philosophy of Logic, p. 2).

Chomsky does accept the existence of determinate synonymy relations between sentences, in a way which contradicts Quine. I have shown in two earlier blogs, ‘Indeterminacy of Translation and Innate Concepts’ and ‘Chomsky and Quine on Analyticity’, that Chomsky’s views on these topics are not justified empirically. Here my main point was to show that Chomsky is not committed to the existence of Platonic propositions.

 


[1] Modern mathematical research has refuted Chomsky’s claims that recursion cannot be learned statistically.

[2] Here I am following Lasnik’s Syntactic Structures Revisited.



                      LINGUAFORM PHILOSOPHY OF MIND: PART 2

In my last blog I summarised three arguments which are typically used by philosophers to argue that having a language is necessary for thinking. In this blog I will criticise the first of these arguments and show that it does not work, and hence does not provide conclusive evidence that language is necessary for thought. The three arguments were: (1) we do not have sufficient evidence to justify attributing an ontology to a child who has not mastered the syntax of quantification; (2) if we attribute thought on the basis of a creature’s reliably discriminating aspects of its environment, then we will end up attributing thought to thermometers, etc.; (3) when we attribute beliefs to non-linguistic creatures, the attributions behave anomalously in relation to referentially opaque sentences. Each of these arguments requires a detailed critique, so I will give each argument its own blog.

 

Argument 1: An Experimental Reply:

In their 1991 paper ‘Ontological categories guide young children’s inductions of word meaning’[1], Soja, Carey, and Spelke tested whether the ontological distinction between objects and non-solid substances conditions the projection of word meanings prior to the child’s mastery of count/mass syntax. Quine denied that children make any ontological commitments prior to learning the syntax of quantification, which helps them master the count/mass noun distinction. He is thus making an empirical claim that, prior to grasping the count/mass distinction, an agent like Mother, a property like Red, and a non-solid substance like Water are all on a par for the child. According to Quine, it is only when a child has mastered the apparatus of divided reference, through grasping the syntax of quantification, that the child can distinguish these categories. Soja et al. set out to test these empirical claims of Quine’s. Before outlining their experiment I will outline Quine’s views on language acquisition.

                                             QUINE’S POSITION

 

Quine claims that when young children mouth words such as ‘Mama’, ‘Water’, or ‘Red’, we are in no position to say that they are using the words as terms which refer to the same things we refer to with those sounds.

For though we may fully satisfy ourselves that the child has learned the trick of using the utterances ‘mama’, and ‘water’ strictly in the appropriate presences, or as a means of inducing the appropriate presences, we still have no right to construe these utterances in the child’s mouth as terms, at first, for things or substances. (1969, 7)

 

According to Quine, from our own mature perspective, we have come to view the child’s mother as a body which revisits the child from time to time, and water as a scattered object. However, from a behavioural perspective, we have little justification for imputing this ontology onto the child.  After making this negative point about our lack of justification for imputing our mature ontology onto the child, he then goes on to make a positive point about the nature of the child’s ontology.

But the mother, red, and water are for the infant all of a type: each is just a history of sporadic encounter, a scattered portion of what goes on. His first learning of the three words is uniformly a matter of learning how much what goes on about him counts as the mother, or as red, or as water. It is not for the child to say in the first case ‘Hello mama again’, in the second case ‘Hello another red thing’ and in the third place ‘Hello more water’. They are all on a par: Hello more mama, more red, and more water. (ibid., 7)

 

Here Quine is clearly claiming that young children who use words, such as ‘Mama’, ‘Red’ and ‘Water’ are not distinguishing them in terms of being respectively Objects, Properties, and Non-solid substances. His reason for arguing so is that we have no positive behavioural evidence to support the claim that children make such distinctions, and in the absence of such positive evidence, there is little reason to impute such a rich ontology to young children. Quine’s claim is that we should only attribute to children the ability to distinguish between Objects, Substances and Properties when we have behavioural evidence which supports us making this distinction.

Progressively, however, the child is seen to evolve a pattern of verbal behaviour that finally comes to copy ours too closely for there to be any sense in questioning the general sameness of conceptual scheme. For perspective on our own objectifying apparatus we may consider what steps of development make the difference between the ‘mama’-babbling infant who cannot be said to be using terms for objects, and the older child who can. It is only when the child has got on to the full and proper use of individuative terms like ‘apple’ that he can properly be said to have taken to using terms, and speaking of objects. Words like ‘apple’, and not words like ‘mama’ or ‘water’ or ‘red’, are terms whose ontological involvement runs deep. To learn ‘apple’ it is not enough to have learned how much of what goes on around you counts as apple; we must learn how much counts as an apple, and how much as another. Such terms possess built in modes of individuation.  (ibid., 8)

 

Now Quine acknowledges that the child may learn ‘apple’ in the same way that he learns ‘mama’ or ‘red’ but he goes on to say that the child will never master ‘apple’ in its individuative use until he gets on with the scheme of enduring physical objects. And in order to get on with the scheme of enduring physical objects, the child will need to master the apparatus of identity, difference etc.  Quine claims that to be able to tell if the child has got the trick of individuation down, we need the following:

How can we ever tell if the child has got the trick of individuation? Only by engaging him in sophisticated discourse of ‘that apple’, ‘not that apple’, ‘an apple’, ‘same apple’, ‘these apples’, ‘another apple’. It is only at this level that a palpable difference emerges between genuinely individuative use and the counterfeits lately imagined. (ibid., 9)

 

It is at this stage that Quine claims that we are justified in attributing an ontology to the child. Prior to that, attributing an ontology to the child is making an unsupported conjecture which is not justified by the facts.  For Quine, our child learns the adjectives ‘same’, ‘another’, ‘an’, ‘that’, ‘not that’ contextually. First the child gets used to various longer phrases which contain them, and he gradually develops appropriate habits in relation to the component words as common parts and residues of those longer forms. He further speculates that the contextual learning of all of these various different particles goes on simultaneously, so that we gradually adjust them to each other as a coherent pattern of usage is evolved (ibid., 10). So the story of child ontology as Quine tells it is that the child’s words just represent scattered portions of what goes on and do not distinguish between Objects, Properties and Substances.

So Quine’s picture of a child learning language, and of the ontology implicit in this language, involves pared-down assumptions according to which we attribute to the child no more than is necessary to explain his verbal behaviour. Quine treats the babbling which a child begins to emit at the age of 12 months as a form of operant behaviour which is emitted rather than elicited. He claims that the family of the child will reinforce the child’s verbal behaviour (such reinforcement made possible by the child’s pre-linguistic quality space) in such a manner that the child’s use of observation sentences such as ‘mama’ will reliably distinguish between ‘mama’ portions of the environment and ‘non-mama’ portions. However, at this stage we cannot credit the child with having an ontology; from the point of view of external verbal behaviour we have no reason to attribute to the child a concept of ‘Mama’ as the name of a spatio-temporal object, as opposed to a mere mass term like ‘Water’. It is only when we engage the child in discourse and he can answer questions using terms such as ‘not that mama’, ‘same mama’, ‘another mama’, etc., that we are justified in attributing to the child a concept of ‘mama’ as an object as opposed to a scattered portion of mama-environment.

                                    TESTING QUINE’S CLAIMS

Soja et al. conducted their experiments to test Quine’s claim (1960, 1969) that young children only develop an ontology after they have grasped the syntax of quantification.  Contrary to Quine, they claimed that young children have a distinction between different ontological categories prior to grasping the syntax of quantification, and that in fact these ontological categories constrain the process of language learning. They distinguished their views from Quine’s in the following way:

According to Quine, then, when children hear a new word, the meaning they assign to it is determined by procedure 0:

Procedure 0: Conclude that the word refers to aspects of the world that share salient properties of the perceptual experience when the word is used. (1991, 182)

 

Soja et al. proposed a different view of the procedures children use when they learn a new word; their procedure assumed that the child had ontological categories prior to learning the syntax of quantification.

Procedure 1    Step 1: Test to see if the speaker could be talking about a solid object; if yes,

Step 2: Conclude the word refers to individual whole objects of the same type as the referent.

Procedure 2 Step 1: Test to see if the speaker could be talking about a non-solid substance; if yes,

Step 2: Conclude the word refers to portions of substance of the same type as the referent.  (ibid., 183)
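Soja et al.’s two procedures amount to a single decision rule keyed to the perceived ontological status of the referent. The sketch below paraphrases their wording; the gloss strings are mine.

```python
# Procedures 1 and 2 above written as a single decision rule: the
# perceived ontological status of the referent (solid object vs
# non-solid substance) fixes how the new word is projected. The
# returned glosses paraphrase Soja et al.'s wording.
def project_word_meaning(referent_is_solid):
    if referent_is_solid:
        # Procedure 1: project to individuals of the same kind
        return "individual whole objects of the same type as the referent"
    # Procedure 2: project to portions of the same substance
    return "portions of substance of the same type as the referent"

print(project_word_meaning(True))
print(project_word_meaning(False))
```

The contrast with Quine’s Procedure 0 is that the branch on solidity happens before any syntax is consulted, which is exactly what the experiment was designed to detect.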

 

Soja et al. proposed an experiment which would decide between these two different proposals about how children learn new words.

One way for Soja et al. to decide between Quine’s proposal and their own was to test how children generalise when they learn words for different kinds of stimuli. If children generalise appropriately prior to a grasp of the count/mass syntax, this would be evidence that Quine was wrong. They therefore ensured that the experiment was done on children below the age of 2½, the age at which children master the syntax of quantification. They tested how children generalise words for non-solid substances as well as for objects. If Quine is right that children generalise names by using Procedure 0, then children will generalise names based on shape whether the name originally refers to an object or to a non-solid substance.

THE EXPERIMENT

Twenty-four 2-year-olds from the Greater Boston area were recruited and randomly placed into two groups (an informative syntax group and a neutral syntax group), with equal numbers of boys and girls in each. Each testing session began with two familiar trials: one object trial and one substance trial. The stimuli in the familiar object trial were a blue plastic cup, a white Styrofoam cup and cup pieces. The stimuli in the familiar non-solid substance trial were peanut butter and Play-Doh. These trials followed the same format as the unfamiliar trials described below. The two familiar trials were followed by eight unfamiliar trials: four object trials and four substance trials, intermingled. The subjects were tested on each trial on two separate occasions. Eight novel words were used: ‘blicket’, ‘stad’, ‘mell’, ‘coodle’, ‘doff’, ‘tannin’, ‘fitch’, and ‘tulver’ (ibid., 187).

TEST 1: AN UNFAMILIAR OBJECT TRIAL IN THE NEUTRAL SYNTAX CONDITION

The test involved presenting the child with an unfamiliar object, e.g. a T-shaped plumbing pipe, and giving the child a name for it, e.g. ‘blicket’. In the neutral syntax condition the child is told ‘This is my blicket’. The experimenter then continued to talk about the object using only ‘my’, ‘the’ and ‘this’ as determiners. She and the subject then manipulated the object. The object was then placed to the side and two other sets of stimuli were presented to the subject. One set consisted of objects of the same shape as the original but made of a different material, e.g. a plastic T-shape; the other set consisted of pieces of the same material but a different shape, i.e. bits of metal. The experimenter then said ‘Point to the blicket’.

TEST 2: AN UNFAMILIAR SUBSTANCE TRIAL IN THE NEUTRAL SYNTAX CONDITION

The child was shown an unfamiliar substance, and told ‘This is my stad’. The experimenters referred to the substance using only the determiners ‘my’, ‘the’ and ‘this’. The experimenter and the subject talked about the substance and played with it. In the presentation of test substances, the subject was shown two substances, the original and a new one, and told ‘Point to the stad’. The original substance was in the alternative configuration, whereas the new substance was in the configuration used originally with the named substance. There were four pairs of substances: (1) Dippity-do (a setting gel) and lumpy Nivea (a hand cream mixed with gravel), (2) coffee (freeze dried) and orzo (a rice-shaped pasta), (3) sawdust and leather (cut into tiny pieces), (4) crazy foam and clay. Of each pair one member was named and the other was used as the alternative to the original in the test presentation. Each member served in both roles across subjects.

TEST 3: OBJECT AND SUBSTANCE TRIALS IN THE INFORMATIVE SYNTAX CONDITION

This condition differed from the neutral syntax condition only in the determiners and quantifiers used when naming the original stimulus. The experimenter introduced an object trial in the informative syntax condition with ‘This is a blicket’ and used ‘a blicket’ and ‘another blicket’ in subsequent discussion. Substance trials in the informative syntax condition were introduced with ‘This is stad’, and in subsequent discussion the experimenter continued to omit determiners or used ‘some’ or ‘some more’. This was the only difference between the informative and neutral syntax conditions. In the familiar word trials subjects differentiated the object and the substance trials as predicted.

                                   WORD LEARNING TRIALS

Subjects differentiated the two types of trials.  Responses were consistent with shape and number on the object trials, and were not consistent with shape and number in the substance trials.

                                     WHAT THE TEST SHOWS

If the child differentiates in the above manner before he has grasped the syntax of quantification, this shows that the child is not generalising the word based on perceptual similarity, but based on the type of entity he is presented with. If, for example, he were generalising according to an innate perceptual similarity quality space which focuses on shape, why would this not work for substances? The answer is that the child recognises that objects and substances are distinct ontological categories. Soja et al. summed up their results as follows:

In sum, the children chose according to object type when the stimulus was an object and according to substance type when the stimulus was a non-solid substance. There was no effect of the syntactic context: performance was neither facilitated nor hindered by the additional syntactic information.

The data from Experiment 1 show that different inferences about the meaning of a newly heard word are drawn according to the ontological status of its referent. If the word refers to an object, the child’s projection respects shape and number, and ignores texture, color, and substance. If the word refers to a non-solid substance, the child’s projection ignores shape and number, respecting texture, color and substance. (ibid., 192)

 

From this experiment Soja et al. claim to have shown that Quine’s view of how children learn language is incorrect because, contrary to what Quine claims, children do indeed distinguish between different ontological categories prior to grasping count/mass syntax. It also suggests that these innate ontological categories, and not the apparatus of quantification, are what help a child learn a language. Soja et al.’s experiment purports to show that children learn words according to ontological distinctions which they exhibit knowledge of prior to learning a language. So their experiment strongly indicates that Quine is wrong on the issue of whether a child needs to master the syntax of quantification in order to have ontological commitments. It should be noted that this experiment was done on a very small sample, so it needs to be replicated on different samples before it can be considered a conclusive refutation of Quine’s position. Nonetheless the experiment is extremely suggestive and does indicate that Quine is wrong.

In the next blog I will consider argument 2, the thermometer argument, and show how it can be handled by adopting the intentional stance. I will consider objections to the intentional stance reply, in particular Donald Davidson’s intentional realism. In the third blog I will consider the referential opacity objection.


 

 

Linguaform Philosophy of Mind Part 1

LINGUAFORM PHILOSOPHY AND WHY LANGUAGE IS SUPPOSEDLY NECESSARY FOR THOUGHT

My friend Matt Bush, when criticising analytic metaphysics, plausibly accuses many of its practitioners of being too deeply wedded to interpreting reality in terms of language. Matt’s criticisms are in my view entirely to the point. Since the birth of analytic philosophy, the analysis of language as a way of understanding philosophical problems has been a central foundation. Philosophers like Frege and Russell, by constructing artificial languages to help them speak more accurately about the foundations of mathematics, set the ball rolling for analytic philosophy’s obsession with language. Wittgenstein in his ‘Tractatus’, using the insights of Russell and Frege, tried to construct a theory of meaning for language which he used to demarcate between statements with sense and nonsensical statements. His later ‘Philosophical Investigations’ took the facts of ordinary language more seriously and focused on the conditions of sensible use of language, as opposed to the more abstract logical view of language set out in the ‘Tractatus’. Nonetheless, in both the earlier and later phases of his philosophical development Wittgenstein thought that an analysis of language was the way to solve philosophical problems. Non-naturalistic philosophers like Austin, Ryle and Hacker have followed him in this approach to philosophy.

I have discussed ordinary language philosophy in detail in other blogs so I will not go into it here. More naturalistically inclined philosophers have also followed Frege, Russell and Wittgenstein in making language central to philosophy, using it as a way of trying to solve philosophical issues in both ontology and epistemology. Matt has criticised non-naturalistic analytic metaphysicians’ attempts to understand metaphysics through linguistic analysis. I share his misgivings when it comes to analytic metaphysics, though that will not be my focus here; I will instead focus on a misplaced emphasis on language in naturalistic philosophy of mind. The emphasis on language which has been with analytic philosophy from the start has, I will argue, had a distorting influence on how naturalistic philosophers have thought about the mind.

Before proceeding I should add that I think the linguistic turn, while excessive, has led to some real improvements in the way that philosophy is done. So my aim here is not to slate linguistic philosophy, merely to curb some of its excesses in the philosophy of mind. In the second part of this blog I will argue that naturalistic analytic philosophers have vastly overestimated the degree to which cognition is tied to linguistic abilities. Before doing this, however, I will first try to summarise the best arguments in favour of the view that thought requires language.

Contemporary(ish) philosophers who make language central to cognition, such as Davidson, Dennett, Rorty, and Brandom, have two primary influences: Wilfrid Sellars and W.V. Quine. I will now try to outline some of the best arguments from these six thinkers that language is central to thought.

In his 1956 ‘Empiricism and the Philosophy of Mind’, Sellars argued that if we want to interpret a creature as being a thinker we need to do so in terms of language. Here is Sellars on the topic[1]:

“All awareness of sorts, resemblances, facts, etc., in short all awareness of abstract entities-indeed, all awareness even of particulars-is a linguistic affair. According to it, not even the awareness of such sorts, resemblances, and facts as pertain to immediate experience is presupposed by the process of acquiring the use of language” (Science, Perception, and Reality, p. 160)

Sellars notes that we can distinguish between two different kinds of awareness. The first type of awareness is the ability to reliably distinguish between aspects of one’s environment. The second type is the awareness that comes from being in the logical space of reasons and having the ability to justify what one claims or believes about the world.

Obviously, of Sellars’s two types of awareness, the first does not require language while the second does. Sellars argues that the first sense of awareness is so permissive that it counts virtually everything as aware. In his (1995) ‘The Problem of Objectivity’ Davidson agrees with Sellars that awareness 1 (classification) is too permissive to serve as a criterion for saying which creatures are aware:

“My reason for resisting this usage is that if we were to accept it we would be committed to holding that the simplest animals have concepts: even an earthworm, which has so little brain that, if cut in two, each part behaves as the undivided whole did, would have the concepts of dry and moist, of the edible and inedible. Indeed, we would have to credit tomato plants or sunflowers with the concepts of day and night.” (Davidson: 1995 p. 8)

Robert Brandom in his (1995) ‘Making it Explicit’ makes a similar point:

“Reliable differential responsive dispositions are only a necessary condition for observational knowledge. Parrots and thermometers can have such dispositions and so can be used by us in effect as measuring instruments to acquire knowledge. But what they have is not knowledge. For they do not understand the significance of their responses; they do not take these responses as reasons for further claims; and they do not understand claims as potentially in need of reasons” (Brandom: 1995 p. 215)

So philosophers like Rorty, Davidson and Brandom, among many others, follow Sellars in thinking that a creature who we discover through behavioural tests can reliably distinguish between aspects of its environment is not necessarily aware. For true awareness to occur a creature needs the ability to place itself in the space of reasons.

Sellars’s argument that if we use the ability to reliably distinguish between aspects of the environment as the criterion of awareness, then we are committed to the view that thermometers and plants are aware, has convinced a lot of philosophers that justification is the better criterion[2]. So Sellars’s argument is one of the key reasons philosophers have for thinking that language is central to thought.

However, Donald Davidson merges the views of Quine with those of Sellars to provide further arguments that language is necessary for thought. Quine argued that prior to mastering the syntax of quantification we have no justification for imputing an ontology to a child. A child may learn to mouth a sound, e.g. ‘MAMA’, in certain intersubjectively salient circumstances; the child may even internalise recursive abilities. However, these abilities do not prove that the child has an ontology. When the child says ‘MAMA’ we have no behavioural evidence which indicates whether the child is using it as a singular term, as an observation sentence, as a count noun, as a mass term, etc. Davidson, following Quine, argues that:

“What calls for ontology is the apparatus of pronouns and cross reference in natural languages, what we represent by the symbolism of quantifiers and variables in elementary logic. These devices provide the resources for constructing complex predicates, and at this point semantics must map names and predicates on to objects.” (Davidson: 2001 p. 140)

While Sellars’s argument above has a lot to offer, Quine’s argument is manifestly false and has been shown to be so by the experimental work of Soja et al. (1991), which I will review in the next blog.

A further argument against attributing propositional attitudes to non-linguistic creatures concerns the difference in the logic of referential opacity between linguistic and non-linguistic creatures. Consider, for example, (1) John believes that ‘Batman caught the Joker’, and (2) John believes that ‘Bruce Wayne caught the Joker’. Sentence (1) can be true while sentence (2) is false. This is because, even though ‘Bruce Wayne’ and ‘Batman’ refer to the same person, John may not know this. We can see from the preceding examples that when quantifying into belief contexts referential opacity obtains (the same is true of quantifying into modal contexts). When dealing with ordinary sentences referential transparency obtains. Thus consider (3) Batman caught the Joker, and (4) Bruce Wayne caught the Joker. If ‘Batman’ and ‘Bruce Wayne’ refer to the same object then (3) and (4) must have the same truth value.
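The transparency/opacity contrast can be modelled in a few lines. This is a sketch of my own (the names `caught` and `believes`, and the way beliefs are modelled as sentences assented to, are illustrative assumptions): in a transparent context truth depends only on the referent, so co-referring names are interchangeable; in a belief context truth depends on the sentence itself, so they are not.

```python
# Illustrative model of referential transparency vs. opacity.

# Reference assignment: 'Batman' and 'Bruce Wayne' pick out one person.
referent = {"Batman": "wayne", "Bruce Wayne": "wayne"}

# Extensional fact of the matter: who caught the Joker.
caught_joker = {"wayne"}

def caught(name):
    """Transparent context: truth depends only on the name's referent."""
    return referent[name] in caught_joker

# John's beliefs, modelled as the sentences he would assent to.
johns_beliefs = {"Batman caught the Joker"}

def believes(name):
    """Opaque context: truth depends on the sentence, not the referent."""
    return f"{name} caught the Joker" in johns_beliefs

# (3) and (4): substituting co-referring names preserves truth value.
assert caught("Batman") == caught("Bruce Wayne")
# (1) and (2): the same substitution flips the truth value.
assert believes("Batman") != believes("Bruce Wayne")
```

The point of the model is just that belief contexts are sensitive to the mode of presentation, not merely to the object presented.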

Davidson notes that the logic of belief ascription changes when we consider non-linguistic creatures. He asks us to consider the case of a dog chasing a cat: the cat runs up tree x, and the dog, who doesn’t see this, starts barking up a tree at the cat. Davidson notes that a person viewing the scene could say that the dog is barking up the wrong tree. So, for example, I could say (5) The dog believes that ‘the cat is up the oak tree’, or (6) The dog believes that ‘the cat is up the tallest tree in the forest’. Now in the case of a linguistic creature, for example me, I may not know that tree x is both the tallest tree in the forest and an oak tree, so one could say that (5) and (6) are referentially opaque. However, things are different with a non-linguistic creature: (5) and (6) do not behave in the same logical way as they do for linguistic creatures. We have no way of saying that (5) and (6) change their truth value in referentially opaque circumstances, because there is no fact of the matter as to whether the dog thinks that ‘the tallest tree in the forest’ and ‘the oak tree’ refer to the same object. Davidson argues that since referentially opaque sentences changing their truth value in belief contexts does not apply to non-linguistic creatures, we MAY not be justified in applying propositional attitude explanations to such creatures.

So the three primary arguments philosophers use for the claim that only language-using creatures can think are: (1) it is over-inclusive to claim that creatures who can reliably distinguish between aspects of their environment have thoughts (the thermometer objection); (2) only creatures who have mastered the syntax of quantification have an ontology; (3) belief ascriptions to non-linguistic creatures behave in logically anomalous ways which imply that such creatures are not really believers.

In my next blog I will critique the above three arguments and show that they do not actually demonstrate that non-linguistic creatures cannot think. In the final blog of this series I will consider Dennett’s more subtle linguaform views as they apply to consciousness and show that they also do not work.


[1] Sellars quotes taken from Richard Rorty ‘Philosophy and The Mirror of Nature’, pp. 182-192.

[2] Dennett bites the bullet and argues that there should be no problem with attributing intentionality to certain artefacts as is pragmatically needed.

Lacan and Cognitive Science Part 2

                               LACAN AND COGNITIVE SCIENCE: PART 2

                                        LANGUAGE AND THE MIRROR STAGE

Lacan tells a developmental story which involves a child going through what he calls the mirror stage. At six months children begin to recognise themselves in the mirror. Other primates, for example chimpanzees, go through this stage as well. However, while chimpanzees quickly get bored of their own appearance, human children remain fascinated with their own reflections. According to Lacan, when children enter the mirror phase they are going through a key human developmental phase which involves identification. The child at this point in his development has limited motor control and is entirely dependent on his caregivers for everything he receives. The mirror image that the child identifies with is a solid, fixed image which stands in stark contrast to the child’s fragmentary subjective experience of himself. The child at this stage identifies with an Ideal-I, his mirror image, which becomes his ego.

For Lacan this mirror stage is the beginning of the child’s development of his ego. The ego becomes further developed as the child begins to learn language. When Lacan talks about learning a language it is important to be clear from the outset how he understands language. His meaning of language is very different from what contemporary generative linguists understand by language. When discussing language, generative linguists like Chomsky are speaking about what they call I-language. Chomsky defines I-language as internal, intensional and individual. This conception of language is an idealised conception, which abstracts from things like memory limitations and day-to-day performance factors. The child begins with his UG, the fixed system that he is born with; this UG consists of a set of fixed principles, some of which are subject to parametric variation, and the linguistic environment the child is born into determines how the various parameters are set. When the child is brought up in a normal environment and his parameters are fixed, he ends up developing his own I-language, e.g. French or English. So it is the world’s I-languages that linguists like Chomsky are studying, and by comparing the world’s I-languages they aim to discover the nature of the UG which makes it possible for people to acquire their various I-languages.
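The principles-and-parameters picture just described can be caricatured in a few lines of code. This is a deliberately crude sketch of my own; the two parameters shown (`head_direction`, `pro_drop`) are standard textbook examples, but the representation as a dict of open slots is purely illustrative.

```python
# Toy illustration of the principles-and-parameters model: UG supplies
# fixed principles plus open parameters; exposure to a particular
# linguistic environment fixes the parameter values, yielding an
# individual I-language.

UG_PARAMETERS = {"head_direction": None, "pro_drop": None}  # open at birth

def set_parameters(environment):
    """Fix UG's open parameters from properties of the ambient language."""
    settings = dict(UG_PARAMETERS)
    settings["head_direction"] = environment["head_direction"]
    settings["pro_drop"] = environment["pro_drop"]
    return settings

# The same innate endowment yields different I-languages in
# different environments.
english_input = {"head_direction": "initial", "pro_drop": False}
i_language = set_parameters(english_input)
assert i_language["head_direction"] == "initial"
```

The contrast with Lacan, developed below, is that Chomsky posits an explicit acquisition mechanism of this general shape, whereas Lacan does not.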

Chomsky’s conception of language focuses on the competence that is acquired by the subject and on the mechanism that goes into acquiring this competence. Other conceptions of language have different starting points from Chomsky’s. I-language is a part of the individual’s biology; any theory of I-language will eventually have to be cashed out neuroscientifically. A different approach is to study what is called E-language. E-language is external and extensional. So a linguist interested in E-language will be interested in the various spoken and written languages of the world. They will try to catalogue how languages differ across different communities and individuals. So the job of an anthropologist or linguist studying E-language will be taxonomic: cataloguing the various rule systems governing language.

These two positions do not exhaust the ways one can study language, and neither position accurately captures the ordinary-language notion of what exactly a language is. It is the job of an ethno-scientist to discover what folk beliefs about the nature of language are. I-language and E-language are not necessarily entirely at odds with each other. So, for example, generative linguists typically make poverty-of-stimulus claims, and claims about universal rules of language, that can only be justified by studying linguistic behaviour and the primary linguistic data a child is exposed to. So a generative linguist will need to be aware of E-language to some degree.

Since Lacan makes heavy use of language in his psychoanalytic theory, it is important to be clear from the outset how he conceives of language. One might think that because Lacan is giving a psychological theory his conception of language will be closer to the I-language conception. He is, after all, discussing how a child acquires a language. However, this is not the case. Lacan’s conception of language is an E-language conception. To see this, let us consider what Lacan’s views on language are.

Lacan describes language as the discourse of the other. Language exists prior to the child being born. Prior to being born the child is assigned a name by his parents. His parents typically decide to bring their child into the world to meet a need that they have. The child is born with a pre-assigned place in the world. When the child begins to articulate his needs he must do so in the language of his family. Lacan argues that in learning to speak we are in effect internalising the discourse of the other. So our conscious ego is infused with the language of the other. This language (unlike Chomsky’s I-language) cannot be reduced to a brain process, as it exists outside the individual brain and is an ever-changing property of the human species.

At this point I should note that a linguist who wants to treat the world’s spoken and written languages as a kind of abstract entity which can vary in a variety of unpredictable ways is within his rights. Such a linguist will of course have to deal with rivals who purport to show that this supposed endless variation in language is actually subject to various constraints. However, for a person offering a psychological hypothesis things are different. If you treat language as some kind of external object which the child somehow internalises and which determines the child’s thought, then you need to offer a testable mechanism to show how this internalisation is achieved.

So, for example, the philosopher W.V. Quine to some degree conceives of language as an external system of communication which can vary widely and be systematised extensionally. However, he postulates cognitive systems which are used by the subject as they acquire their language: an innate similarity quality space, the ability to use analogical synthesis, to use induction, and to respond differentially to positive and negative reinforcement. Whether Quine’s cognitive model is sufficient to account for language acquisition is not within the remit of this blog. My point is that he has a model which at least tries to account for the facts of language acquisition and is testable. People like Sampson, and Lappin and Clark, offer different accounts of how language as an evolving variable is acquired. Lacan, who claims that children have their desires overridden by language as they acquire it, needs to offer a testable cognitive model if we are to evaluate his views.

When Lacan speaks of child development he cites Baldwin’s studies of how children recognise themselves in the mirror at six months. However, Lacan’s story about the mirror stage goes massively beyond the data. The child may look at the mirror at six months; however this is not really sufficient evidence to support the claim that the child identifies with the fixed image in any real sense. Furthermore, this single piece of data about child development tells us nothing substantial about how the child acquires language. Lacan speaks about how when the child wants to articulate his desires he must do so in terms of the language of the other. However, to speak of the child’s desires we need a cognitive model of how the child conceives the world prior to acquiring language. To understand the child’s cognitive abilities we need experimental and behavioural data. Lacan provides very little behavioural data to support his conjectures.

In his ‘The Lacanian Subject’ Lacanian therapist Bruce Fink makes the following claim about the mirror stage:

“Now the ego according to Lacan arises as a crystallization or sedimentation of ideal images, tantamount to a fixed, reified object with which a child learns to identify, which a child learns to identify with his or herself. These ideal images may consist of those that the child sees of him or herself in the mirror, and they are ideal in the sense that, at the age at which the mirror image begins to play an important role (six to eighteen months), the child is quite uncoordinated and truly but a jumble of sensations, and impulses, the mirror image presenting a unified surface appearance similar to that of the child’s capable more coordinated parents.” (The Lacanian Subject, p. 36)

However, Fink and Lacan’s characterisation of the child pre-mirror stage as a bundle of sensations and impulses is a massive exaggeration. We do know certain facts about pre-linguistic children and the stages they go through as they develop language. A few hours after birth children look longer at drawings of human faces than at other patterns (Fantz 1963). From 3 months children are monitoring the eye gaze of others (Cohen 1995). From 4 months humans have some grasp of object behaviour (Baillargeon et al., 1985). At 6 months children begin reaching for objects. So prior to beginning the mirror stage at 6 months the child already exhibits knowledge of object behaviour and recognises his carers’ voices and faces. And over the next 12 months, as they continue in their mirror phase, children display a variety of cognitive abilities which go far beyond what Fink and Lacan allow. At 9 months children begin to recognise intentionality, and proto-declarative referencing begins (Cohen 1995). At 10 months children begin to crawl. Between 9 and 12 months children begin to engage in triadic behaviours. At 12 months children begin pointing, which brings them deeper into the world of triadic relations. From 12-16 months children learn roughly 0.3 words per day (Bloom, 2002). From 14-16 months children begin to walk. From 18-24 months children display a deep understanding of others’ intentional aims.

Thus far we have seen that Lacan provides no cognitive apparatus showing how children acquire their language, and that his model of the cognitive state of children entering the mirror stage massively underestimates the intellectual capacities which the child has. These weaknesses do not prove that his theory, that by entering into language we in effect internalise the discourse of the other, is false. They do show that his model is empirically under-motivated.

In some respects Lacan’s discussion of our ego being the discourse of the other is similar to Dennett’s idea of the self as a centre of narrative gravity. If we think of language in terms of memetic theory, we can get a vague idea of what Lacan means: when a person acquires a language they acquire, in effect, a theory of their culture. In ‘Consciousness Explained’ Dennett makes the following claim:

“Selves are not independently existing soul-pearls, but artifacts of the social processes that create us, and, like other artifacts, subject to sudden shifts in status.” (Consciousness Explained, p. 423)

Dennett’s claim sounds very similar to something Lacan would say. The only difference is that Dennett, unlike Lacan, provides a cognitive architecture to support his claim.

One justification which Lacan provides for his psychological model is its success as a therapeutic theory, so in the next blog I will consider how Lacan uses his theory to help with hysteria. Hysteria, known today as conversion disorder, is sometimes exemplified in post-traumatic stress disorder and Gulf War syndrome. Symptoms include feeling unwell, lipothymias, vague diffuse states of fatigue, and angry outbursts (spasms and paralysis are normally only found in women, not men), but multiple conversion pains are common in men, as are fear of heart disease, digestive disturbances, and poorly defined neurodigestive disorders (Dor, p. 97).

According to the Lacanian psychoanalyst Joël Dor there is a basic tendency to seduction in male hysteria: a need to make sure that he is universally loved. He is above all bent on being loved by everyone. Hence a major feature of hysteria is a feeling of being unsatisfied. He cannot take pleasure in what he has but regrets what he does not have. Once the hysteric is assured that his desire can potentially be fulfilled, he renders himself unfit to achieve it through anxiety, depression or even neurasthenia (Dor, p. 99). Alcoholics and drug addicts are SOMETIMES hysterics. What function do these addictions play for hysterics? They help him give the impression to the woman that he possesses what she wants: ‘The Phallus’. He sees the female other as an idealised object and hence tries to avoid direct personal confrontation with the woman in the area of sex. The result is the institution of behaviours, of which the most common is the homosexual mask or game; according to Dor this is not true homosexuality but rather a parody of homosexuality, providing the reassurance of secondary compensation. The male hysteric’s homosexuality often entails compulsive masturbation accompanied by a fantasy with a perverse connotation, usually lesbian fantasies. Impotence and premature ejaculation are typically features of the male hysteric. So impotence, premature ejaculation and homosexuality are ways of keeping such men away from sexual encounters with the idealised woman. According to Dor a possible origin for male hysteria is the mother putting the child in the position of providing for her lack.

The above description of hysteria was taken from Joël Dor’s book ‘The Clinical Lacan’. The description makes a series of empirical claims by Lacanians about the nature of hysterics; in the next blog I will discuss whether there is sufficient empirical evidence to support these claims.

Chomsky and Quine on Analyticity part 2

CHOMSKY AND QUINE ON ANALYTICITY PART 2

ANALYTICITY NATURALISED?

                                    SECTION 1: Types of Analyticity

In Part 1 I discussed Quine’s arguments against the notion of analyticity. It was noted that Quine was attacking three notions: (1) analyticity as a defence of the a priori; (2) analyticity as a form of necessity; (3) analyticity as a purely semantic truth independent of collateral information. Chomsky’s claim that an analytic/synthetic distinction is a fact of natural language was then discussed. Throughout the discussion I treated Chomsky’s arguments as pertaining to semantic analyticity, by which I mean analytic truth understood as truth in virtue of the meaning of the terms, independent of collateral information. In this part I will consider whether Chomsky’s positive conception of analyticity can support a priori knowledge and our grasp of necessary truths.

As we have seen above, when Chomsky discusses language and mind he does so in a thoroughly naturalistic manner. Our language results from an innate module of the mind, and Chomsky tells an evolutionary story about the origins of this module (see, for example, Hauser, Chomsky, and Fitch 2002). He argues that any analytic connections in natural language result from the internal structure of the innate language faculty. He also claims that human concepts have idiosyncratic features which were of no particular use to our ancestors, so it is unlikely that they were formed through natural selection. He assumes that if our innate conceptual abilities were not formed through natural selection then they must have resulted from a random mutation. Obviously, conceptual abilities which resulted from a random mutation will in no way provide us with any kind of a priori justification.

When discussing the rules which govern our language, Chomsky has been quite explicit that we are not justified in applying the rules of natural language. He argues that we just automatically implement them, in the way a computer implements a programme:

In short, if I follow R, I do so without reasons. I am just so constituted. So far, these conclusions offer no challenge to the account discussed earlier. I follow R because S` maps the data presented into S*, which incorporates R; then I apply R blindly. There is no answer to the Wittgenstein Sceptic and there need be none. My knowledge, in this instance, is ungrounded….I have no grounds for my knowledge in any useful sense of the term and no reasons for following the rules: I just do it. If I had been differently constituted with a different structure of my brain, I would have followed a different rule. (1986b, 225)

 

In the above passage, Chomsky is responding to Kripke’s rule-following paradox. He is claiming that he does not have a solution to the paradox: he cannot say why calculating according to the plus function is justified whereas calculating according to the quus function is not. His answer is to point to the fact that we do just calculate one way rather than the other, and to explain this fact by invoking innate constraints on the type of mathematical rules we are capable of following. The innate structure of our brain means that we interpret ‘+’ in terms of ‘plus’; however, if our brain were structured differently we might well interpret ‘+’ in terms of ‘quus’. I will not here discuss Chomsky’s solution to the rule-following paradox; my main aim is to show that his answer to Kripke shows the folly of reading Chomsky’s conception of analyticity as an epistemological conception.
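The plus/quus contrast can be made concrete in a short sketch. The definition of quus is Kripke’s own (agree with addition when both arguments are below 57, answer 5 otherwise); the code itself is merely illustrative and not part of anyone’s official formulation:

```python
# Kripke's "quus" function: quus(x, y) = x + y if x, y < 57, else 5.
# The threshold 57 and the deviant value 5 are Kripke's own examples.

def plus(x: int, y: int) -> int:
    return x + y

def quus(x: int, y: int) -> int:
    # Agrees with plus on all arguments below 57...
    if x < 57 and y < 57:
        return x + y
    # ...but diverges beyond them.
    return 5

# For every pair of arguments below the threshold, plus and quus agree,
# so a finite history of small calculations cannot determine which rule
# a speaker's '+' expresses.
assert all(plus(x, y) == quus(x, y) for x in range(57) for y in range(57))
assert plus(68, 57) == 125
assert quus(68, 57) == 5
```

The sceptical point is that nothing in a speaker’s past usage distinguishes the two functions; Chomsky’s reply, as quoted above, is simply that our brains are innately constituted to implement one of them.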

Chomsky has explicitly claimed that people are born with innate concepts, some of which will as a matter of empirical fact yield analytic connections. He has furthermore claimed in a number of places that the concepts of natural language do not refer to entities in the mind-independent world in the manner in which people like Quine, Frege, and Tarski believe they do. This being so, it is clear that on the Chomskian picture terms like ‘bachelor’ and ‘unmarried man’ do not mean the same thing because they pick out the same mind-independent entity; rather, they mean the same thing because they both pick out the same concept.

This view of course puts Chomsky in a very weak position. It means that we cannot justify our sentences as true by virtue of the meaning of the terms involved; we just automatically categorise them so, and if our brains were constituted differently we would categorise them differently. A number of points follow immediately. The types of analytic connection which Chomsky defends have no epistemological significance: they are not connections which justify us in holding certain beliefs true. So it could be argued that the analytic sentences which Chomsky is concerned with are of a philosophically uninteresting type.

However, such an interpretation is complicated by the following quote from Chomsky:

These are truths of meaning, not of fact. The a priori framework of human thought, within which language is acquired, provides necessary connections among concepts, reflected in connections of meaning among words and, more broadly, among expressions involving these words… (2000b, 63)

 

When Chomsky talks about an a priori framework of human thought and about necessary connections among concepts he seems to be making claims which his explicit theory would not allow. No contingent fact about our brains would justify us in holding a belief true, and it is likewise highly unlikely that a contingent feature of our brain structure would somehow put us in touch with a necessary truth about the universe. So it seems that Chomsky’s claim that humans are born with analytic connections among concepts is inconsistent with his claim that analytic truths provide a priori justification or knowledge of metaphysical necessity. Throughout the remainder of this section I will endeavour to reconcile these seemingly inconsistent views.

Given that Chomsky has claimed elsewhere that our syntactic rules are not justified but are merely followed blindly, it is obvious that we should not interpret his claims about an a priori framework of human thought epistemically. Rather, we should interpret an a priori framework of thought to mean an innate framework of thought. But this of course leads to the question of what Chomsky means by necessary connections among concepts. Consider the concepts 2, 4, +, and =: these can be combined to create the necessary truth 2+2=4. Philosophers such as Hume and Ayer have claimed that the truths of arithmetic are analytic truths. Given Chomsky’s various discussions of analyticity, and his views on concepts expounded in his reply to Kripke, it seems that Chomsky also considers the truths of arithmetic analytic, in the sense of being true based on the meaning of the terms involved. But it seems that this type of analyticity is not strong enough to support metaphysical necessity.

In order for our analytic truths to yield metaphysical necessity, there would have to be some sort of correspondence between the truths of meaning encoded in the brain and necessary truths which exist in an unchanging mind-independent world. If the connection were a merely contingent one, then it would be just the result of time and chance. It is sometimes helpful to grasp the distinction between contingent and necessary truths using the framework of possible worlds. According to Chomsky, it is a fact about the human brain that some of the concepts we are born with will automatically yield analytic connections. Surely, though, this being so, it is also possible that the human brain could have been constructed differently. It is obviously possible that humans could have been born with innate concepts but no analytic connections between those concepts: call such creatures Fodorsapiens. On the analogy of males and females, who are both members of the species Homo sapiens, it is also possible that there were two different strands of Homo sapiens, distinct from each other but able to reproduce with each other: call them Chomskysapiens and Fodorsapiens. Both strands are descended from the same ape ancestors, and they diverged from each other only because of a random mutation which did not affect their ability to interbreed. We can imagine in our story that the Fodorsapiens eventually died out because their slightly different DNA made them unable to fight off the common cold. The point is that there are many possible worlds in which it was the Chomskysapiens who died out, or in which a random mutation changed which sentences the Chomskysapiens find analytic. The truths of meaning which Chomsky claims are a component of the mind/brain seem to be nothing more than contingent results of our evolutionary history, and his claim that they provide necessary connections remains unargued.

So it would seem that, since the brain is the contingent product of its evolutionary history, any concepts it is innately born with will correspond with a mind-independent reality, if at all, only coincidentally. Here, of course, the type of analytic truth will be of paramount importance. If we are to count the truths of arithmetic as analytic, then we can sketch a story of how an innate number sense would be vital to survival. We could note that a creature which could not calculate whether the number of foes was equal to the number of friends would not survive very long. We could point to creatures such as monkeys and dogs, which display a grasp of number, to show that this sense is probably a universal feature of any creature that wants to survive in a hostile environment.

When one tries to explain other analytic truths, such as ‘All bachelors are unmarried men’ or ‘All dogs are animals’, sketching a story in terms of survival value seems more difficult. The idea of ‘All bachelors are unmarried men’ being a necessary truth, or a truth which is justified independently of the facts, seems unintelligible. However, the idea that the sentence is analytic because ‘bachelor’ and ‘unmarried man’ correspond to the same mentalese concept is obviously intelligible. So the question we are faced with is the following: since analytic connections are not limited to the language faculty, what is their place in the overall taxonomy of the mind which Chomsky has constructed?

In a personal communication Chomsky tried to clarify some of these questions for me. He claimed that:

In this context, “a priori” means what Konrad Lorenz called “biological a priori,” that is, part of the cognitive equipment with which an organism organizes and approaches external data, mapping it into experience, and then interpreting it.  The issue of justification doesn’t arise, any more than it does in the case of “book” rhyming with “took” (in my speech).  This doesn’t conflict with Boghossian. He is talking about something different. (Chomsky: personal communication).

 

So Chomsky here agrees with my analysis of his views on analyticity: they offer nothing towards helping us clarify such notions as a priori knowledge and necessary truth.

Even if Chomsky is correct that people are born with an innate system of concepts which yields analytic connections, this fact will still leave him and Quine in agreement on some central points. Quine was criticising the way philosophers had used analyticity as a philosophically neutral explanation of a priori knowledge and necessary connections. He wanted to show that the only type of analyticity he could make sense of, stimulus analyticity, could not explain the philosophically meatier conceptions of the a priori and necessity. Chomsky’s alternative conception of analyticity, like Quine’s, does not purport to explain our grasp of necessary truths, nor any supposed a priori knowledge. So both thinkers are in agreement on this point at least. However, as we have already seen, Quine was not only arguing against analyticity in the sense of an explication of the a priori and necessity; he was also arguing against the semantical notion of analyticity. So one clear area of disagreement between the two thinkers centres on whether there are analytic connections in the sense of sentences whose truth is fixed purely in terms of meaning, independent of extra-linguistic fact. Quine gave arguments against this in the first four sections of ‘Two Dogmas’, and these are arguments which Chomsky never managed to answer. Furthermore, Chomsky has offered little evidence in support of his positive conception of analyticity. So it is clear that Chomsky has not really answered Quine’s objections to the idea of an analytic/synthetic distinction.

Since Chomsky’s conception of innate analytic connections in natural language and Quine’s conception of stimulus-analytic connections rest on such radically different pictures of how language is learned, it would be foolish to say that they are in agreement on the nature of analyticity. However, it is significant that both thinkers are in general agreement that analytic sentences do not explain a priori knowledge or necessary truths. So the debate between them on analytic sentences is an illusory one when it comes to metaphysics and epistemology. There is, however, a substantive debate between the two thinkers as to the nature of analytic sentences as a part of natural language, and I will return to this point later in the essay.

 


                     SECTION 2: FACULTY PSYCHOLOGY AND ANALYTICITY

 

Chomsky has at various times spoken about the language faculty, a science-forming faculty, a folk-physics faculty, and a moral faculty. To say that we have a faculty is to say that we have domain-specific knowledge of something, and that this knowledge has not resulted from a general inductive procedure. A clear indication of a faculty operating in the brain is when, as a result of disease or an accident, a person loses a certain ability (e.g. linguistic ability) but their general intelligence remains unimpaired. So, for example, people such as Marc Hauser have argued for the existence of a moral faculty on the grounds that selective brain damage can result in people losing their moral sense while retaining their overall general intelligence. Likewise, Chomsky has discussed cases of people losing their linguistic abilities without any effect on their overall general intelligence.

Another key reason to believe that some abilities result from a faculty is that the ability arises at set times in a person’s development, and at more or less the same time in different people, independently of their general intelligence. Again, Hauser argues for the existence of a moral faculty on the grounds that children at the same stage of development uniformly answer tests in similar ways.

There is strong evidence that children come equipped to the world with an intuitive folk physics and folk psychology. The fact that these two domains of knowledge are not commensurable indicates that they may belong to different faculties of the mind, and there is independent evidence that folk physics and folk psychology are faculties of the brain. A person’s folk psychology develops at set times in childhood independently of general intelligence. So, for example, by the age of four children can typically pass the false-belief test, with the notable exception of autistic children, most of whom fail the test at this age. The fact that autistic children fail this test is instructive because they typically show no deficit in general intelligence. Furthermore, children who do in fact have a lower IQ, such as children with Down syndrome, manage to pass the test by the age of four. This suggests that we may be justified in claiming that folk psychology is a distinct faculty of the human brain.

Overall, I would claim that there is some reasonable evidence (though far from overwhelming) to support the postulation of faculties for folk physics, folk psychology, morality, and language, amongst others. When Chomsky claims that humans are born with innate concepts which automatically yield analytic connections, he fails to detail the nature of these concepts, and whether they are unique to particular faculties. Consider the following purported analytic statements: (1) All ducks are animals. (2) If John persuaded Bill to go to school, then Bill decided/intended to go to school and did so without being under duress. (3) Three times five is half of thirty. It is a fair bet that most people would consider these statements analytic, though it must be admitted that some might classify (3) as synthetic a priori. Let us assume for a minute that most people would consider all three sentences analytic. The question is: for a Chomskian, where does their analyticity derive from?

The subject matter of (1) is biology, of (2) folk psychology, and of (3) mathematics. All three statements are, on the traditional definition, true by virtue of the meaning of the terms and not because of extra-linguistic facts. For Chomsky, as we have seen, the meaning which creates these analytic connections is something we are born with, not something which is true by convention. The three statements contain concepts of agency, number, and biological kinds. What we do not know is whether those concepts grow in the brain separately from the different faculties, or internal to them. If, for example, a mathematical truth such as 2+2=4 grows internal to a faculty of mathematics, then damage to this faculty would have no significance for our ability to understand analytic truths which grew internal to other faculties. This is a question on which Chomsky has not in general been very clear. He correctly notes that more empirical research is required before we can decide on the correct answer, and that it is empirical research that is needed, not a priori philosophical speculation.

Whatever the exact structure of the various faculties is, Chomsky thinks that if there are analytic connections in different domains they will emerge internal to the faculties. He is explicit on this point in personal correspondence with me:

 

Given my language faculty, it’s a necessary truth that “book” rhymes with “took” and that “chase” entails “follow.” Horwich is quite right.  This has no bearing on, say, whether arithmetic truths are synthetic a priori… On the Hume questions, we have to distinguish what our innate sense of number determines, and what is the truth about numbers (if we regard them as entities that exist independently of our cognitive capacities, as Platonic entities of some sort).  The two presumably coincide, but we can imagine an organism in which they would not (and maybe that’s us, as a matter of fact). (Chomsky: personal communication)

 

One important point to note about this communication is that Chomsky here admits a kind of scepticism about the truths of mathematics. He claims that while it is possible that our innate number sense corresponds to mind-independent Platonic entities, we cannot say for certain that it does. Given that our best scientific theories about the world are couched in mathematical formulae, this is a startling admission. Chomsky is claiming that we have no justification for acting in accord with our innate number sense, and no way of knowing whether that sense corresponds with any facts in the objective world.

Furthermore, Chomsky is arguing that different types of analytic connection will emerge internal to our various faculties. So we will have analytic truths derived internal to our innate number sense, analytic truths derived internal to our language faculty, and so on. This means that Chomsky’s view on the nature of analyticity is parasitic on his views about the various faculties of the mind. It also shows that truths which are typically grouped together by philosophers as analytic derive their analyticity internal to particular faculties. So Chomsky’s justification for arguing that analytic truths exist depends on accepting his controversial views on (1) innate concepts, (2) faculty psychology, and (3) a potentially radical scepticism about the truths of mathematics. Chomsky’s conception of analyticity seems to raise more questions than it answers.

                   SECTION 3: CHOMSKY AND QUINE ON ANALYTICITY

In the previous sections we established that neither Chomsky nor Quine has a conception of analyticity rich enough to provide a priori justification and evidence of necessity. However, I also noted that, despite having this much in common, their views on the nature of analyticity diverge radically. In this section I compare the two views and consider which deals better with the available data.

Having already explained Quine’s negative critique of analyticity at the beginning of this chapter, I will not repeat the material here. In this section I will focus on how Quine explains away people’s intuitions of analyticity.

As is well known, Quine criticised the notion of analyticity as it was used by philosophers from Descartes to Ayer. Quine, of course, did not rest with such negative criticisms; he also explained why people held intuitions of analyticity in the first place. Quine was well aware of experimental work which showed that people have an intuitive conception of analyticity. For Quine, however, such intuitions do not provide evidence that analytic connections actually exist in natural language. He notes that analyticity intuitions typically set in where people have difficulty figuring out what someone who denies the truth of such a sentence could mean by its denial. This reaction is perfectly consistent with Quine’s web-of-belief story: if a person denied something deeply embedded in our web of belief, an interpreter would indeed have difficulty figuring out what the person meant by the denial.

We saw above that Putnam criticised Quine’s blanket criticism of the notion of analyticity. Putnam correctly argued that the web of belief story does not account for trivial cases of analyticity. In Word and Object Quine told a story about analyticity which, he argued, was largely consistent with the story told by Putnam.

Quine noted that terms like ‘Indian nickel’ and ‘Buffalo nickel’ are not stimulus-synonymous, while ‘bachelor’ and ‘unmarried man’ are. Quine explains this disparity by noting how the different terms are learned. Terms like ‘Indian nickel’ and ‘Buffalo nickel’ are learned by association with non-verbal stimuli; terms like ‘bachelor’ and ‘unmarried man’ are learned by association with verbal stimuli. So, for example, one learns the meaning of ‘bachelor’ by learning to associate it with ‘unmarried man’. Obviously, from a social point of view the stimulus meaning of the term ‘bachelor’ will vary from person to person. A child could be taught the meaning of the word ‘bachelor’ by ostensive definition; however, the particular person being pointed to will not be representative of the class of people in the world who are bachelors, so a child who learned the meaning of ‘bachelor’ this way would be unable to generalise the term. Even if a vast number of different bachelors were pointed to and named, the child would still not be able to generalise the term, because there are no particular characteristics shared by bachelors alone other than being unmarried men. If an adult wants to teach a child what a bachelor is, he can do no better than tell the child that bachelors are unmarried men. In the case of the terms ‘Indian nickel’ and ‘Buffalo nickel’ things are different: we learn these terms through ostension, and if we learn that ‘Indian nickel’ and ‘Buffalo nickel’ pick out the same object, we do so inductively. Quine argues that some technical terms of science do not have a socially fixed stimulus meaning to govern how they are used; in this sense they are like terms such as ‘bachelor’ and ‘unmarried man’. There is, however, one difference between the technical terms of science and terms like ‘bachelor’: ‘bachelor’ gets its meaning by being connected to ‘unmarried man’, while the meaning of a technical term is governed by its relation to a whole scientific theory.

Quine’s discussion of analyticity above is largely consistent with the story told by Putnam. Quine argues that a sentence like ‘All bachelors are unmarried men’ appears to be analytic because we learn the meaning of ‘bachelor’ by associating it with ‘unmarried man’. It is not actually analytic in the sense of being immune to revision, nor does it provide us with any kind of a priori justification; the sentence merely has the appearance of being analytic because of the way it is learned. As language evolves, the sentence ‘All bachelors are unmarried men’ may no longer be considered analytic, though in terms of present usage it certainly gives the appearance of being so. Likewise, Quine’s web-of-belief story explains our grasp of less trivial sentences which have the appearance of being analytic, though, as in the ‘bachelor’ case, he again accepts that such sentences are subject to revision as our language and theories evolve.

Like Fodor, and unlike Putnam, Quine does not beg any questions, because he is not trying to explain the existence of analytic truths by appeal to one-criterion concepts. Quine is merely arguing that, since we learn some concepts through one-criterion concepts, this explains why people have analyticity intuitions. Ultimately, Quine argues that there is no sensible way of explicating an analytic/synthetic distinction, and for this reason we have no reason to believe that such a distinction exists in language. He acknowledges that people have an intuition of analyticity, but he explains this intuition away through his notion of one-criterion concepts.

Overall, then, Chomsky and Quine agree on certain points. Neither thinker holds that sense can be made of the notion of analyticity as a defence of a priori justification, or of our supposed grasp of necessary truths. Both thinkers, however, do manage to offer at least some explanation of why people have the intuition that a distinction between analytic and synthetic truths exists. Chomsky’s explanation of analyticity is sketchy and vague and has little evidential support. He claims that people are born with innate concepts which as a matter of fact yield analytic connections; however, he does not offer empirical evidence to support this claim. Furthermore, his claims about analyticity commit him to the view that analytic connections exist in different forms in different faculties. So his purported explanation of analyticity relies on a prior commitment to faculty psychology, and while this commitment may turn out to be justified, it is very much an open question. More importantly, his views on the analyticity of mathematics yield a type of scepticism which cannot be taken seriously. Quine’s arguments against the notion of an analytic/synthetic distinction were not refuted by Chomsky. But Quine’s explanation of why people have an intuition of analyticity relies on a conception of language acquisition which he has not shown to be correct; whether his account of people’s analyticity intuitions is right is parasitic on whether his conception of language is right, and at present the evidence does not settle the matter. What is clear is that Chomsky’s positive arguments for the existence of an analytic/synthetic distinction have little evidence in their support.

 

CHOMSKY AND QUINE ON ANALYTICITY PART 1

    CHOMSKY AND QUINE:THE ANALYTIC AND THE SYNTHETIC: PART 1

                                SECTION 1: THE OBJECT OF ATTACK

Despite the fact that Chomsky and Quine are both methodological naturalists, they nonetheless disagree on a number of issues. In this blog I will consider one such disagreement: their differing views on the status of the analytic/synthetic distinction. I will show that this dispute between them is an entirely naturalistic one. Traditionally, philosophers who used analyticity in their theorising did so in the service of a non-naturalistic philosophy: such thinkers used analyticity as an explanation of non-naturalistic notions such as a priori knowledge and our grasp of metaphysical necessity. One of my main reasons for discussing the dispute between Chomsky and Quine on analyticity is to illustrate a disagreement between the two thinkers which, despite superficial appearances, is entirely naturalistic. I will evaluate the evidence put forth by both thinkers and show that Chomsky has not offered sufficient evidence to justify his claim that analytic connections exist in natural language.

Quine’s ‘Two Dogmas of Empiricism’, which contains his attack on the analytic/synthetic distinction, is probably one of the most famous papers in the history of analytic philosophy. However, while the paper has been extremely influential, it has long been noted that the object of Quine’s attack was unclear, even to Quine himself. In ‘Two Dogmas’ Quine seemed to be attacking three different notions: (1) truth in virtue of meaning; (2) necessary truth; and (3) a priori knowledge. This conflation was understandable because, at the time Quine wrote ‘Two Dogmas’, most philosophers believed that an analytic truth was a truth which was both a priori and necessary. Logical positivists had argued that the only sensible way to account for supposedly a priori or necessary truths, without postulating some mysterious faculty of intuition, was to do so in terms of analyticity. So it is understandable that Quine would have believed that by attacking analyticity he was also attacking the a priori and necessity.

Post Saul Kripke, few philosophers would conflate analyticity, necessity, and the a priori. It is worth bearing these distinctions in mind when evaluating Quine’s arguments, because doing so helps us key in on what his arguments are directed towards and on whether they are effective. The a priori/a posteriori distinction is an epistemological distinction; the necessary/contingent distinction is a metaphysical distinction; the analytic/synthetic distinction is a distinction between purely semantic truths and truths which rely on both semantic facts and extra-linguistic information. In the 1950s Quine, like most philosophers, believed that these distinctions lined up. He furthermore believed that, since analyticity was the only way of explicating the a priori and necessity, by attacking analyticity he was attacking the other distinctions too. But Quine was not merely concerned with attacking analyticity in so far as it was used as an explanation of the a priori and necessity; he was also concerned with attacking the very notion of a purely semantic truth. He believed that truth by virtue of the meaning of terms was a senseless notion. I will now discuss his actual arguments against analyticity in ‘Two Dogmas’, focusing first on his arguments against the semantical notion of analyticity.

 

               SECTION 2: TWO DOGMAS OF EMPIRICISM: A SUMMARY

When logical positivists spoke of analytic truths they included truths which we would now separate into necessary truths, a priori truths, and purely semantic truths. Under the banner of analytic truths one would find philosophically important notions such as the truths of logic, geometry, and arithmetic, as well as trivial semantic truths. So, for example, one would find complex mathematical statements grouped under the same category as a trivial statement such as ‘All bachelors are unmarried men’. Quine, who was working within this logical positivist tradition, grouped trivial analytic statements together with the more philosophically important analytic truths.

In the first four sections of ‘‘Two Dogmas’’ Quine critiqued the very notion of an analytic truth. Quine argued that we could make sense of what he called Logical Analyticity, an example of which is (1) ‘All bachelors are bachelors’. However, he stated that he could make no sense of analytic truths which result from substituting synonyms for synonyms, for example (2) ‘All bachelors are unmarried men’. We will call analytic truths derived by substituting synonyms ‘Meaning Analyticity’. Quine argued against Meaning Analyticity because he could find no cogent way of explicating what it amounted to. Any attempt to define meaning analyticity relied on notions which could themselves only be made sense of if we helped ourselves to the prior notion of meaning analyticity. However, since analyticity was the very notion we were trying to define, if the terms we use to define it themselves rely on the notion of analyticity, then our explanation will be circular. Quine’s objection to the notion of analyticity is usually called his ‘circle of terms’ argument. We cannot define analyticity without terms such as ‘‘synonymy’’, ‘‘necessarily’’, ‘‘meaning’’, etc., and these terms can only themselves be defined in terms of the ill-understood notion of analyticity. So Quine argued that, since we cannot define analyticity in a non-circular manner and the notion makes no sense behaviourally, we have no justification for claiming that analytic sentences, as traditionally conceived, exist.
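Quine’s circularity worry can be pictured schematically. The following is my own schematic summary of the definitional chain from sections 1–4 of ‘‘Two Dogmas’’, not Quine’s notation; it is meant only to display how each candidate definiens hands the burden back to analyticity.

```latex
% A schematic rendering of Quine's 'circle of terms' argument
% (my own summary of 'Two Dogmas' sections 1-4, not Quine's notation).
\begin{align*}
  \text{analytic}(S) \;&:=\; S \text{ is obtainable from a logical truth by putting synonyms for synonyms}\\
  \text{synonymous}(a, b) \;&:=\; \ulcorner\text{All } a \text{ are } b\urcorner \text{ is \emph{necessarily} true}\\
  \text{necessary}(S) \;&:=\; S \text{ is \emph{analytic}}
\end{align*}
% analytic -> synonymy -> necessity -> analytic: the chain of definitions
% closes on itself, which is Quine's ground for rejecting the whole family.
```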

The primary aim of the first four sections was to criticise the notion of meaning analyticity; in these sections Quine was concerned with attacking the notion of purely semantic truth. In the final two sections Quine attacks what he takes to be the dogma of reductionism. He begins by attacking the verification theory of meaning. He notes that, according to the verification theory of meaning, the meaning of a statement is the method of confirming or infirming it (1951, 37). He then notes that an analytic statement is that limiting case which is confirmed no matter what (ibid., 37). Quine claims that it is possible to argue for a conception of analyticity using the verification theory of meaning: one could say that two statements are synonymous if they have the same conditions of verification, and then define analyticity in terms of such synonymy.

Quine rejects the definition of analyticity in terms of conditions of verification because of deficiencies which he sees in the verification theory of meaning. He famously argued that the unit of empirical significance is the whole of science (ibid., 42). He pointed out that all empiricist attempts at reduction had failed. Furthermore, he noted that such failures were failures in principle; all such attempts relied on undefined terms to construct their reduction. It was because of this failure that Quine argued for his famous web of belief. When cashing out the nature of the web of belief Quine claimed that: ‘our statements about the external world face the tribunal of experience not individually but as a corporate body’ (ibid., 46).

An important consequence of this holism is its effect on the falsification of our scientific theories. Since our total theory of the world is an interconnected web of belief, and this web faces the tribunal of experience as a corporate body, a falsification of a prediction made by the theory will reverberate throughout the theory. However, Quine notes that when we are faced with a supposed falsification of a prediction, how we modify the theory which made the false prediction is dictated by pragmatic considerations. So, for example, since mathematical and logical principles are so deeply connected to every area of our scientific theories, they will be the last thing we modify if our theory makes a false prediction. Nonetheless, there is no reason in principle why we cannot revise mathematical or logical laws if doing so seems the best way to accommodate recalcitrant experience.

Laws of mathematics have traditionally been viewed as special in that they could not be explained empirically[1]. Kant claimed that mathematical laws were a priori synthetic truths because, while they were a priori true, they were not analytic truths. An analytic truth was, for Kant, a statement in which there was nothing in the predicate concept that was not already contained in the subject concept. However, Kant noted that in the case of mathematical truths we could not explicate them in terms of analyticity. So, for example, if we consider ‘2+2=?’, according to Kant no analysis of the preceding concepts will give us the answer ‘4’. Kant claimed that if we want to discover an answer in mathematics we have to construct it in intuition; merely analysing concepts will get us nowhere.

Logical Positivists argued, contrary to Kant, that mathematical truths could be reduced to logical truths. They recognised two types of truth, analytic truth and synthetic truth. Mathematical truths, like truths of logic, were viewed by them as analytic truths.

Quine’s web of belief argument, with its emphasis on the fact that mathematical and logical truths are revisable in principle but in practice are rarely ever revised, is pertinent to the considerations of Kant and the Logical Positivists. To explain the special status of mathematics Kant had to postulate a mysterious kind of knowledge (a priori synthetic knowledge). Logical Positivists tried to explain mathematics in terms of logic which was in turn explained by linguistic convention. Quine demonstrated that the doctrine of truth by convention was dubious because in order to consistently apply any logic we would need to presuppose a logic prior to such an application (1934). While Kant’s postulation of a priori synthetic knowledge was never made sense of in a non-mysterious way, Quine’s web of belief picture explained why mathematical and logical truths seemed to have a special status in our overall theory of the world. Furthermore, it managed to avoid the difficulties which plagued the pictures of the Positivists and Kant.

Quine’s web of belief argument shows that we do not need to appeal to analytic truths, in the sense of truth in virtue of meaning, to explain the apparent specialness of mathematics and logic. It furthermore shows that we do not need to appeal to a priori synthetic knowledge to explain what is apparently special about mathematics. Quine’s picture of mathematics and logic as deeply embedded in our total theory of the world explains their apparent specialness adequately. Furthermore, by admitting that basic laws of mathematics and logic can be revised, he makes sense of the history of science in a way that thinkers like Kant do not.

However, as Putnam pointed out in his paper ‘‘The Analytic and the Synthetic’’, while Quine’s argument for the web of belief is persuasive as an explanation of non-trivial analytic sentences, it is far from persuasive when it comes to trivial analytic sentences. While it is sensible to explain away truths of mathematics, conceived as analytic truths, by appeal to their being deeply embedded in our web of belief, it is not sensible to claim that a trivial statement such as ‘All bachelors are unmarried men’ is deeply embedded in our web of belief. I will explore this weakness in Quine’s explanation of analyticity in Section 4. First I will explicate Chomsky’s difficulty with Quine’s claim that there is no sensible distinction to be drawn between analytic and synthetic sentences.

SECTION 3: CHOMSKY’S DEFENCE OF THE DISTINCTION

It appears, then, that one of the central conclusions of modern philosophy is rather dubious: namely, the contention- often held to have been established by the work of Quine and others- that one can make no principled distinction between questions of fact and questions of meaning, that it is a matter of more or less deeply held belief. (2000b, 63)

 

As a linguist, Chomsky was primarily concerned with whether an analytic/synthetic distinction was, as a matter of empirical fact, a distinction that ordinary speakers of natural language recognised. He argued, contra Quine, that analytic connections do exist in natural language, and that it is the job of the cognitive scientist to study such connections. As in the case of syntax, we begin with people’s intuitions; in the case of analytic connections, we need to test whether people can reliably distinguish sentences which are analytic from sentences which are synthetic. So the question arises: would an ordinary speaker of English be able to distinguish an analytic sentence like ‘All bachelors are unmarried men’ from a synthetic sentence like ‘All bachelors like Seinfeld’? For Chomsky’s claim that analytic connections are a fact of natural language to be shown correct, experiments need to be conducted. However, experimental research on the analytic/synthetic distinction has not exactly been overwhelming. There have been some experiments on the distinction, for example: Apostel, Mays, Morf, and Piaget’s Les liaisons analytiques et synthetiques dans les comportements du sujet; Arne Naess’s Interpretation and Preciseness; as well as Katz and Fodor’s paper ‘‘The Structure of a Semantic Theory’’. These studies do suggest that people find statements synonymous to intuitively graded degrees. And this seems to support Chomsky’s claim that people have intuitive conceptions of which sentences of natural language are analytic and which are synthetic.

Chomsky argues that since the experiments show that people intuitively distinguish between analytic and synthetic sentences, we need to explain this fact. His explanation centres on poverty of stimulus considerations. He argues from the speed at which children acquire words (about 12 words a day at peak periods), and the incredible complexity of the words acquired, to the claim that children must merely be labelling concepts they are born with. From his argument for innate concepts, he draws the following conclusion:

This would appear to indicate that the concepts are already available, with much or all of their intricacy and structure predetermined, and that the child’s task is to assign labels to concepts, as might be done with limited evidence given sufficiently rich innate structure. And that these conceptual structures appear to yield semantic connections of a kind that will, in particular, induce an analytic-synthetic distinction, as a matter of empirical fact. (2000b, 62)

 

On the face of it, this argument simply does not work. Chomsky’s poverty of stimulus argument for concepts relies on the assumption that when children acquire words, this ability indicates a grasp of the complex concept which each word means. So, for example, it implies that when a child acquires a word such as ‘Mama’, the child understands the same complex concept which adults do when they explicate what the concept MAMA means. Chomsky has never provided any evidence to support this wildly implausible claim. If we take away the unproven claim that the words children learn at peak periods of language acquisition express full-blown adult concepts, then the acquiring of 12 words a day is less impressive than Chomsky claims. If a child using a word such as ‘Mama’ had only a simple understanding of what it meant, then the fact that the child was learning such words so fast would not seem so miraculous, and there would be no need to postulate innate concepts to explain the speed of word acquisition.

While there is some evidence that children from as young as four months of age demonstrate some understanding of concepts such as OBJECT, CAUSATION and AGENT, this evidence does not support Chomsky’s more radical claims about concepts. I will discuss the evidence for children’s conceptual abilities in chapter three; here I will merely note that it is at best suggestive. The evidence does show that children have some concepts prior to learning language. However, it does not conclusively demonstrate that these concepts are innate, nor that children possess most of their concepts prior to learning language. Chomsky is making much more radical claims than that some abstract concepts such as OBJECT or CAUSATION are innate. In a personal communication with me he made the following claims:

Also, there is good evidence that innateness of concepts goes far beyond the philosophically interesting examples that you mention.  It’s hard to imagine how else people could acquire the meanings of the simplest words in the language — “river,” “tree,” “person,”….  They have rich properties for which there is no empirical evidence for the child. (Chomsky: Personal Communication)

 

Furthermore, in his New Horizons in the Study of Language and Mind, Chomsky made even more radical claims about the innate conceptual abilities of children. Chomsky considered an argument from Putnam (1988) that evolution could not have equipped us with an innate set of concepts including ones like CARBURETTOR and BUREAUCRAT. Putnam argued that evolution could not have equipped us with these concepts because in order to do so evolution would need to have anticipated all future contingencies. Chomsky replied by noting that Putnam’s argument is incorrect because we do not need to assume that evolution anticipated all possible contingencies, just the particular ones in question.

Chomsky claimed that a similar argument to Putnam’s had long been accepted in immunology, and that recent work by Niels Kaj Jerne challenges it:

Notice that a very similar argument had long been accepted in immunology: namely, the number of antigens is so immense, including every artificially synthesized substances that had never existed in the world,  that it was considered absurd to suppose that evolution had provided an ‘innate stock of antibodies’, rather formation of antibodies must be a kind of ‘learning process’. But this assumption might well be false. Niels Kaj Jerne won the Nobel Prize for his work challenging this idea… (ibid, 65)

 

This analogy proves nothing: it is not in doubt that it is possible that people are born with an innate stock of concepts like CARBURETTOR and TREE. What is in doubt is whether there is any evidence supporting the claim that such concepts are innate. Chomsky has provided none whatsoever here. He merely gestures vaguely in the direction of poverty of stimulus considerations to support his conjecture about innate concepts:

Furthermore, there is good reason to suppose that the argument is at least in substantial measure correct even for such words as Carburettor and Bureaucrat, which, in fact, pose the familiar problem of poverty of stimulus if we attend carefully to the enormous gap between what we know and the evidence on the basis of which it is known. (2000b, 65)

 

Here Chomsky is using a poverty of stimulus argument similar to the argument he uses in the area of syntax. I will discuss the poverty of stimulus argument in the area of syntax in chapters two and four. The poverty of stimulus argument in syntax has the merit of being precise and testable. Chomsky has provided no such explicit model of a poverty of stimulus argument for innate concepts. So we have no reason to believe Chomsky’s radical claims about innate concepts until more evidence is provided.

It is of course possible that Chomsky is correct that children are born with innate concepts such as CARBURETTOR, TREE, BUREAUCRAT, RIVER, etc.; however, a great deal of evidence is needed to support such an extraordinary claim. Chomsky vaguely points to poverty of stimulus considerations. However, he provides no details of where the supposed gap lies between the knowledge children display of the concept CARBURETTOR and the data the child receives when learning it. So until he spells out his poverty of stimulus argument in more detail, his claims on these matters should be viewed with extreme scepticism.

More importantly, even if Chomsky did provide evidence that such concepts are innate, it would not follow that they would automatically yield analytic connections. Jerry Fodor has long agreed with Chomsky that most of our concepts are innate; however, he does not think that analytic connections exist in natural language. It is worth briefly considering Fodor’s views on innateness and analyticity because doing so will help us better understand Chomsky’s views.

Fodor agrees with Chomsky that most of our concepts are innate. However, he does not use the same arguments as Chomsky to reach this conclusion. While Chomsky’s poverty of stimulus argument points to a gap between what we know and what the data of experience can account for, Fodor argues that, because of certain facts about the structure of concepts, all known theories of how concepts are learned are incorrect in principle. He claims that there are two different ways of accounting for our knowledge of concepts. The first is to say that most of our concepts are definitions which are defined in terms of primitive concepts. The primitive concepts are either sensory primitives such as RED, SQUARE, etc., or abstract concepts such as CAUSATION, AGENCY, and EVENT. What the primitive concepts are will depend on the nature of the theory being expounded. So, for example, the empiricist philosopher David Hume argued that the primitive concepts are sensory experiences, while the linguist Steven Pinker argues that the primitive concepts are abstract concepts such as CAUSATION, AGENCY, and OBJECT which we are born knowing. Fodor thinks that Pinker, Hume and all definitional theorists are wrong because concepts cannot be explicated in terms of definitions.

The other theory of concept acquisition holds that our concepts are basically prototypes which are learned statistically. Fodor argues against this view because he thinks that prototype theory cannot account for the compositionality of concepts. I will not consider Fodor’s arguments against prototype theory here because they are not relevant to the concerns of this thesis. Furthermore, I will not consider the fact that Fodor has lately retracted his radical nativism because, again, this is not relevant to the overall argument of the thesis. I am only concerned with explicating why Fodor does not follow Chomsky in arguing that analytic connections follow from the fact of innate concepts. Fodor’s reason for not believing that analyticity follows from innate concepts is that he does not think that innate concepts have internal structure. His argument that concepts do not have internal structure derives from his belief that concepts are not definitions. Fodor essentially uses three different arguments against the claim that concepts are definitions:

(1) Lack of definitions: In general, we have not found definitions for concepts. There are millions of concepts and very few definitions (maybe fifty or so). (1998, 46)

(2) Developmental argument: Not all concepts can be definitions; some must be primitive, and the defined concepts must be defined partly in terms of the primitives. This being so, there should be a developmental stage when children know only primitive concepts. However, we have no evidence of such a stage. Therefore, concepts cannot be definitions. (ibid, 47)

(3) Production argument: If concepts were definitions, then sentences containing complex concepts would be harder to produce than those containing simpler concepts, since calling the definition of a complex concept to mind would take longer when speaking. However, the experimental evidence indicates that the production of sentences involving complex concepts takes no longer than the production of sentences containing less complex concepts. (ibid, 49)

Fodor believes that his arguments against definitions are decisive. He claims that without definitions, concepts have no internal structure, and furthermore that if concepts have no internal structure, then there are no analytic sentences. Kant famously defined an analytic sentence as one in which the predicate is entirely contained within the subject; obviously, if a concept has no internal structure, then nothing can be contained within it. For this reason, Fodor thinks that there are no analytic sentences. He notes that one of the main arguments for definitions is that they explained people’s felt intuitions of analyticity. So this gives us two options: (1) accept that people’s intuitions of analyticity are a reason to believe in conceptual connectedness; (2) explain away the intuitions. Since Fodor has offered reasons to deny that there are complex concepts, he opts for (2), to explain away the intuitions of analyticity.

 

                  EXPLAINING AWAY ANALYTICITY INTUITIONS:

As we saw above, Quine explained away our intuitions of analyticity in terms of his web of belief picture. This works for supposedly non-trivial analytic sentences like ‘F=ma’. However, it is less plausible as an explanation of apparently trivial analytic sentences like ‘‘All bachelors are unmarried men’’. We cannot plausibly explain away such cases by saying that they are deeply embedded in our total web of belief.

As we have already seen, Putnam claimed that Quine’s argument worked as a criticism of the a priori but not as a criticism of apparently trivial analyticity. He claimed that a sentence such as ‘‘All bachelors are unmarried men’’ is an analytic sentence, and that its analyticity derives from the fact that BACHELOR is a one-criterion concept. The concept BACHELOR is not, according to Putnam, connected to any concepts in our web of belief other than UNMARRIED and MAN. Putnam argues that sentences which people intuitively find analytic, but which are not deeply connected to our web of belief, can be explained by his notion of one-criterion concepts. Fodor disagrees with this explanation because he thinks it is circular: in order to determine which concepts are one-criterion concepts, you already need to know which sentences are analytic. So the explanation of analyticity presupposes the existence of the very thing we are doubting, and we are back to Quine’s circle of terms again.

Fodor offers a different account of one-criterion concepts which he thinks is less problematic than Putnam’s. Fodor agrees with Putnam that one-criterion concepts exist. He furthermore thinks that he can use these one-criterion concepts to explain people’s intuitions of analyticity. However, Fodor argues that since he is not using one-criterion concepts to explicate analyticity, his story is not circular in the way that Putnam’s is. Here is Fodor’s story in a nutshell:

Suppose you think the only epistemic route from the concept C to the property it expresses depends on drawing inferences that involve the Concept C*. Then you will find it intuitively plausible that the relation between C and C* is conceptual; specifically, that you can’t have C unless you have C*. And the more you think that it is counterfactual supporting that the only epistemic route from C to the property it expresses depends on drawing inferences that involve the concept C*, the stronger your intuition that C and C* are conceptually connected will be. (Ibid, 83)

 

Fodor argues that people’s intuitions of analyticity are really just epistemic intuitions which are confused for semantic connections. So for example I can only know that someone is a bachelor by knowing that he is an unmarried man. According to Fodor, I know that all bachelors are unmarried men because of epistemic access to the facts in the world, not because of the internal structure of concepts. He holds that people’s intuitions of analyticity are illusions which result from conflating epistemic properties with semantic properties.

So we can see from Fodor’s argument that analyticity does not follow from innateness. If the innate concepts are unstructured, then they will not automatically yield analytic connections. Chomsky assumes that children’s innate concepts are structured and will, as a result, automatically yield analytic connections. Furthermore, unlike Fodor and Quine, he takes it for granted that people’s intuitions of analytic connections are good evidence that such connections exist.

In a personal communication I asked Chomsky why he believed that analyticity followed from innateness:

You’re quite right that it isn’t obvious, but I think it is correct.  It’s an empirical issue, in principle, but a hard one to investigate — and like other empirical questions, certainty is unattainable, just a high degree of plausibility.  Personally, I think that serious inquiry would reveal that “tomorrow is two days after yesterday” or “my uncle is male” or “nightmares are dreams” and much else would withstand the harshest tests, and I don’t think that Quine or Fodor have suggested any reason to doubt these expectations. (Chomsky: Personal Communication)

 

Chomsky’s reply is instructive because he simply claims that certain constructions are obviously analytic and that any future tests would confirm this belief. He does not answer the question of why analyticity follows from innateness; he merely claims that it is in principle an empirical question. Presumably his reasoning is that people do recognise a distinction between analytic and synthetic sentences, and he believes that people are born with a massive number of innate concepts. From this he believes it follows that the structure of these innate concepts is our best explanation for our intuitions of a distinction between analytic and synthetic sentences. However, this vague argument relies on an unproven claim about innate concepts, and a further unproven claim about the structure of these purported innate concepts. So Chomsky thus far provides no real explanation for people’s felt intuitions of a distinction between analytic and synthetic sentences.

Chomsky appeals to the fact that certain sentences are obviously analytic; presumably he is claiming that such sentences are intuitively felt to be analytic. However, the fact that people have intuitions of analyticity obviously does not tell us whether Fodor’s story or Chomsky’s is correct.

Since Chomsky has offered no compelling evidence to support his claim that humans are born with innate concepts, a debate between him and Fodor on whether innate concepts have internal structure is a nonstarter. Chomsky has offered no evidence for innate concepts, so he cannot support his views on analyticity by appealing to innate internally structured concepts. Fodor has offered some arguments for innate concepts[2], however they offer little support to Chomsky’s position.

It is of course open to Chomsky to use Fodor’s arguments for innate concepts; however, a vital part of Fodor’s argument is that concepts are not definitions and so have no internal structure. So if Chomsky did use Fodor’s arguments for innate concepts, he would be left claiming that concepts are innate but have no internal structure, and hence yield no analytic connections. Either approach leads to the conclusion that analytic sentences do not exist. So Chomsky has offered us no real reason to believe that we can derive analyticity from innate concepts.

We have shown that analyticity does not follow from innate concepts. It is worth noting that the converse also holds: in denying that innate concepts exist, one is not thereby denying that analytic connections exist in natural language. Empiricists such as Ayer, Carnap and Hume, who were not committed to the view that concepts are innate, nonetheless argued that analytic connections exist in natural language. Ultimately, then, Chomsky’s arguments for innate concepts are irrelevant to facts about analyticity unless he can provide evidence that such innate concepts are structured in such a way as to yield analytic connections. Thus far Chomsky has provided no evidence about the structure of these supposedly innate concepts, so we have no reason to believe that innate concepts lead to analytic connections in natural language.

Independent of his unproven claims about innate concepts yielding analytic connections, Chomsky’s arguments for analytic connections in natural language rest on three primary claims. The first is that ordinary language speakers can readily tell whether a sentence of natural language is true by virtue of the meaning of the words involved plus extra-linguistic fact, or whether it is made true entirely by facts of meaning. The second is that he has never seen a convincing counterexample to clear cases of analytic sentences, such as ‘Tomorrow is two days after yesterday’. And the third is that he thinks particular sentences with a relational structure, such as ‘‘If John killed Bill then Bill is dead’’, are more difficult to find counterexamples to than the simple examples focused on in the philosophical tradition, such as ‘‘All bachelors are unmarried men’’.

Chomsky’s first claim, that people intuitively recognise a distinction between statements which are synonymous and those which are not, has been tested experimentally through the use of questionnaires. The small literature does seem to indicate that people have an intuitive sense of a distinction which loosely corresponds to what is traditionally called the analytic/synthetic distinction. However, such studies involve showing people different sentences which they categorise into two groups, and the fact that people categorise in a way consistent with an analytic/synthetic distinction proves nothing: such categorisation may reflect an epistemic categorisation rather than a semantic one. The various studies do not distinguish between the Quine/Fodor story and the Chomsky story. The Quinean model would equally predict that people would have an intuitive sense of a distinction between truths like ‘‘2+2=4’’ and ‘‘Most dogs are less than six feet tall’’.

The difference between Quine and Chomsky on the question of analyticity is whether a person’s intuitive sense of analyticity is best explained in terms of an innate system of concepts, or in terms of deeply embedded beliefs in a person’s holistic theory of the world. The weak point of Quine’s view is that his web of belief story works well for epistemologically significant truths such as ‘‘2+2=4’’ but not for ‘‘All bachelors are unmarried men’’, which cannot reasonably be claimed to be deeply embedded in our total theory of the world. However, Putnam’s one-criterion concepts, as modified by Fodor, capture the trivial cases of analyticity: Fodor has shown that such cases can be viewed as epistemic knowledge confused with conceptual knowledge. The fact that people have intuitions of analytic connections in natural language is therefore not that significant; such intuitions can be explained away either as intuitions of truths deeply embedded in our web of belief, or as one-criterion concepts.

Chomsky’s second reason is not very convincing, as philosophers have shown over the last fifty years that some statements which seem immune to revision are in principle revisable; see, for example, Putnam on the sentence ‘‘All cats are animals’’. Now Chomsky could reply that while some apparently analytic sentences have been shown to be in principle revisable, the paradigm statements have not. If he were to argue so, then in order to make his claim testable he would need a list of paradigm cases of analytic sentences, and he would then need to show how such sentences could withstand the most strenuous of tests. Chomsky has never done anything like this. His remarks on analytic sentences remain just that: remarks, and unproven ones at that.

Chomsky has also claimed that philosophers erroneously focus on analytic sentences with too simple a structure; he claims that sentences with a relational structure provide much better evidence for analytic sentences. To test Chomsky’s claims we will need to consider some examples of sentences with a relational structure and test whether they support his claims over Quine’s. He has used two examples repeatedly: (1) If John killed Bill, then Bill is dead. (2) If John persuaded Bill to go to college, then John caused Bill to intend to go to college. Here Chomsky discusses these sentences:

Notice again that we appear to have connections of meaning in such cases as these; we have a rather clear distinction between truths of meaning and truths of fact. Thus, if John persuaded Bill to go to college, then Bill at some point decided or intended to go to college and did so without duress; otherwise, John did not persuade Bill to go to College. Similarly if John killed Bill, then Bill is dead (though John may or may not be depending on the facts). These are truths of meaning not of fact. (2000b, 62)


Sentences 1 and 2 are examples of sentences with a relational structure. The use of the predicates ‘killed’ and ‘persuaded’ in the antecedents has a direct bearing on what words are allowed in the consequent. Take, for example, Sentence 1 above, and consider it as an argument schema: If X killed Y then Y is…? What Chomsky seems to want to say here is that the only way to make the schema true is to fill the blank with ‘dead’ or something synonymous with ‘dead’. Such an interpretation is patently false, as can be seen from the following: we can insert ‘is not going to play football next week’, ‘is not going to run a marathon next week’, etc., and the truth value will remain invariant. So it could be claimed that the relational structure does not in any obvious sense indicate that, from the fact of John killing Bill, we have to infer that Bill is dead. However, upon closer inspection it becomes clear that anything we fill into the blank must be consistent with the fact that Y is dead. So, for example, if one were to offer ‘is going to sing his favourite song tomorrow’ to fill in the blank, one would be constructing a false argument, because singing a song is not consistent with being dead. The two attempts to fill in the blank above which result in true arguments work because not doing something is largely consistent with being dead. From this we can conclude that anything we fill into the consequent must at least be consistent with being dead, though it obviously does not have to be synonymous with being dead. So if we consider ‘X killed Y so Y is -’, only something consistent with being dead can be slotted into the consequent if the truth value is to remain invariant: any conditional with such an antecedent must have a consequent consistent with Y being dead, or the conditional will be false.
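The consistency constraint just described can be sketched as a toy model in code. To be clear, the predicate lists and the function name below are my own illustrative inventions, not anything drawn from Chomsky or Quine; the sketch simply makes explicit the claim that the schema tracks consistency with ‘dead’, not synonymy with it.

```python
# Toy model of the argument schema "If X killed Y, then Y is <predicate>".
# The claim being illustrated: the schema yields a true conditional exactly
# when the consequent predicate is consistent with Y being dead.

# Hand-picked, purely illustrative predicate set (my own example lists).
CONSISTENT_WITH_DEAD = {
    "dead",
    "not going to play football next week",
    "not going to run a marathon next week",
}


def schema_is_true(consequent: str) -> bool:
    """True iff 'X killed Y, so Y is <consequent>' comes out true,
    i.e. the consequent is consistent with Y being dead."""
    return consequent in CONSISTENT_WITH_DEAD


# The schema holds for predicates merely consistent with being dead...
assert schema_is_true("dead")
assert schema_is_true("not going to play football next week")
# ...but fails for predicates inconsistent with being dead.
assert not schema_is_true("going to sing his favourite song tomorrow")
```

Note that of the predicates accepted here, only ‘dead’ is synonymous with what the antecedent entails, which is the point of the argument: the schema constrains the consequent by consistency, not by synonymy.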

If one puts ‘is dead’ into the consequent one gets an analytically true statement. Likewise, if one puts ‘is not playing tennis’ one also gets an analytic sentence. This construal of the argument leaves us in a situation where we cannot equate analyticity with synonymy: the statement ‘X killed Y’ is obviously not synonymous with the statement ‘Y is not playing tennis’, despite the fact that ‘X killed Y so Y is not playing tennis’ is an analytic statement. Chomsky could argue that Sentence 1 is analytic because of the internal structure of the complex concept KILLED. He could argue that the complex structure of such innate concepts automatically tells us what inferences can be derived purely from the meaning of the concept[3]. So our language faculty determines what can be filled into the schema ‘X killed Y so Y is -’ in order to yield analytic connections. Here we have an argument which seems to show how we can go from an innate concept such as KILLED to analytic sentences based on the internal structure of the concept. However, such an argument again relies on assumptions about innate concepts which have not been proven correct. So if Chomsky wants to construct such an argument he needs to find evidence to support his belief in innate concepts such as KILLED. Independent of the argument from innate concepts, all that is left is the claim that some analytic statements are difficult to refute, and of course this claim is largely consistent with Quine’s claim that such statements are deeply embedded in our overall theory of the world.

So Chomsky’s claim that sentences of a relational nature provide stronger evidence for analytic connections in natural language than the simpler cases philosophers typically discuss has been shown to be erroneous. His assumption that this is so relies heavily on his belief in innate concepts. The fact that sentences with a relational structure seem to offer cases of analytic connections can instead be explained in terms of how deeply embedded notions like KILL and INTENTION are in our overall theory of the world.

The above discussion shows that Chomsky has offered no compelling arguments demonstrating that Quine’s critique of the analytic/synthetic distinction is incorrect. It is possible in principle that Chomsky’s positive views on analyticity are correct; however, we do not as yet have any empirical evidence to support his claims on this matter. In the next blog I will try to explicate what type of analyticity Chomsky is committed to if we assume he is correct that analytic sentences exist in natural language as a result of constraints imposed by the concepts we are born with. Assuming Chomsky is correct on this point, I will consider whether analytic connections being encoded in the brain of a subject meet the criterion of analyticity as traditionally conceived.


[1] John Stuart Mill was an exception to this rule.

[2] I will not here evaluate whether Fodor’s arguments for innate concepts work, because the question has no relevance to the argument of the thesis.

[3] For a detailed discussion of issues connected to the above see Fodor’s 1970 ‘Three Reasons for Not Deriving Kill from Cause to Die’, and Pietroski’s ‘Small Verbs, Complex Events: Analyticity without Synonymy’. The debate between Pietroski and Fodor would take us too far afield from the concerns of this thesis, but it is worth reading in its own right.

Lacan and Cognitive Science Part 1

                                 LACAN AND COGNITIVE SCIENCE

Psychoanalysis has undergone a quiet revolution through attempts to (1) make its structure more explicit and formal (see, for example, Matte Blanco and Wilfred Bion) and, more recently, (2) merge psychoanalysis with neuroscience. Psychoanalysts like Mark Solms and Joseph Dodds have led the charge in creating the new field of neuropsychoanalysis. Some neuroscientists have joined Solms and Dodds by conducting studies which aim to test psychoanalytic theories: Heather Berlin, Eric Kandel, and Jaak Panksepp, for example, have done a lot of such work. These studies confirm some aspects of psychoanalytic theory and disconfirm others. The Oedipus complex has received little support from neuroscience, while phenomena like repression have received ample support in neuroscientific tests. There is still a long way to go in the attempt to bring psychoanalysis into the scientific fold.

Despite my optimistic assessment above, there is still a lot of work to be done. The formal models of psychoanalysis that I mentioned are not justified with enough behavioural data, and psychoanalytic theory in general does not pay enough attention to developmental psychology and behavioural theory. Part of a psychoanalyst’s training involves detailed observation of children and analytic interpretation of their behaviour. This aspect of the training is vital for a discipline which sometimes explains adult neurosis in terms of childhood trauma. However, a serious difficulty with this approach is that when one sticks to pure observation it is possible to interpret the same data by applying different theories to it. To know whether one’s theory is the correct theory of the data, as opposed to an ad-hoc story which just happens to fit it, we need experiments. Experiments serve the purpose of testing our theories to see whether they hold up under stringent conditions. If psychoanalysts want their theories to be accurate they need to construct experimental tests of their own, or at the very least use the experimental evidence which child psychologists like Pinker, Soja, Carey, and Spelke have produced.

The above difficulties are easily rectified, and doing so will, I believe, greatly help psychoanalysis to become more scientific. There is, however, a different and more difficult challenge facing attempts to make psychoanalysis more scientific. Contemporary psychoanalysis is divided into sects: schools which follow the teachings of Lacan, pure Freudian schools, followers of Anna Freud’s theories, followers of Melanie Klein’s theories, and so on. Neuropsychoanalytic work typically focuses on orthodox Freudian theory, which lends itself well to neuropsychoanalytic testing. However, there is no reason in principle why the theories of Klein and Anna Freud cannot be subjected to similar testing. In fact, in a 2013 talk, ‘Anxiety One or Two?’, Yoram Yovell provided some surprising neuroscientific evidence which supports Melanie Klein’s paranoid-schizoid theory.

Of the theorists mentioned above, there is one whose theories are less easily integrated with modern scientific theory: Jacques Lacan. This is a serious problem. Klein, Sigmund Freud, and Anna Freud disagree with each other on various topics, but because their theories are testable, as we learn more we can discover which of them (if any) is correct on the topics where they disagree. Since Lacan’s theories are less easily studied biologically, they are harder to test. Karl Popper may have been wrong about a lot of things, but he was surely right that a theory which is totally untestable is unscientific. So if Lacan is to be taken seriously, it is important that we try to discover what aspects of his theories are scientific.

Lacan’s work has been denounced by a variety of serious thinkers. The three quotes below are representative dismissals of his work by cognitive scientists and evolutionary psychologists:

“Jacques Lacan I actually knew. I kind of liked him. We had meetings every once in a while but quite frankly I thought he was a total charlatan, just posturing before the television cameras the way many Paris intellectuals do. Why this is influential I haven’t the slightest idea I don’t see anything that should be influential (Chomsky: Veterans Unplugged Podcast Dec 2012)”

“… Although Lacan uses quite a few key words from the mathematical theory of compactness, he mixes them up arbitrarily and without the slightest regard for their meaning. His ‘definition’ of compactness is not just false: it is gibberish. (Sokal and Bricmont: Intellectual Impostures)”

“With Lacan, matters were altogether different. The question of evidence was not even raised by his followers. Everything the great master wrote was taken on trust, as if it were holy writ. Everything Lacan said was right, just because he said it. Debate in Lacanian seminars was purely a matter of exegesis – what did the master mean by such-and-such a phrase? Nobody ever took the next logical step and asked – was he right? That was simply assumed. (Dylan Evans: From Lacan to Darwin, p. 6)”

The above quotes are interesting, but a couple of points need to be made. Chomsky is a brilliant linguist and political theorist; however, he has never shown any sign of engaging with Lacan in a serious way. So despite Chomsky being a very highly regarded thinker, his views on the topic should not be accorded any more significance than, say, Stephen Hawking’s uneducated views on philosophy. Sokal and Bricmont made some serious criticisms of Lacan, demonstrating that his use of mathematics was nonsensical. However, Sokal and Bricmont were primarily considering Lacan from the point of view of critical theory, so their criticisms, while interesting and to the point, do not necessarily refute everything Lacan said about psychoanalysis. Not everybody would agree with me on this particular point; Richard Dawkins responds to replies such as mine as follows:

 “Perhaps he is genuine when he speaks of non-scientific subjects? But a philosopher who is caught equating the erectile organ to the square root of minus one has, for my money, blown his credentials when it comes to things that I don’t know anything about. (Richard Dawkins: Review of Intellectual Impostures p. 1)”

I must admit that I am very sympathetic to Dawkins’s point of view here: when theorists make such stupid claims they do lose a lot of credibility. However, history teaches us that clever people can be brilliant on one topic and talk nonsense on others: consider how Francis Collins can be brilliant at genetics and hold silly views on God, or how Newton was a brilliant physicist and yet held silly views on alchemy. So, Dawkins’s claim aside, I think we should examine whether Lacan’s views on psychoanalysis hold up to critical scrutiny. Dylan Evans, whom I quoted above, is a different case from Dawkins, Sokal, or Chomsky, because he trained as a Lacanian psychoanalyst and is a scholar of Lacanian theory. Evans began with a degree in cognitive science before training as a Lacanian psychoanalyst and enrolling in a PhD programme studying Lacan. He wrote a number of books on Lacan, including his well-known ‘An Introductory Dictionary of Lacanian Psychoanalysis’, and his books are still used as references in Lacanian studies. However, while practicing as a psychoanalyst Evans noted that using Lacanian theory was not helping his patients, and for this reason he stopped practicing. He returned to university to study Lacan for his PhD, before leaving in disgust when he discovered that Lacan’s theories did not hold up to critical scrutiny and that his colleagues did not seem to care. So Evans left Buffalo and went to the London School of Economics, where he compared Lacanian psychoanalysis with evolutionary psychology and came to believe that evolutionary psychology is the scientific way to study the mind, while Lacanian psychoanalysis is an unfalsifiable series of dead ends.

Evans claimed not only that Lacan was treated as a cult leader by his followers, but also that Lacan tried to save Freud from having his biological hypotheses refuted by translating them into a series of metaphors[1]. However, Evans correctly noted that this approach in effect robbed Freud’s theories of any claim to scientific status:

“This strategy was doomed, however. It appeared to save Freud’s work from refutation by modern biology, but at the price of removing all empirical import. The biological Freud was wrong, but at least he advanced clear, testable claims. The cultural-linguistic Freud that Lacan invented, on the other hand, was completely untestable. He was not merely impervious to contradictory evidence in biology; he was impervious to any evidence at all. Lacan rescued Freud from a fatal encounter with modern biology by removing him from the world of science altogether. (ibid p.10)”

Imagine how unscientific Darwinian theory would have been if a similar approach had been adopted. Charles Darwin, for example, had no workable mechanism to explain heredity. This does not bother evolutionary theorists: Darwin was a clever man who lived in a particular era and simply did not have the tools to solve that particular problem (he was not aware of Mendel’s work). The aim of evolutionary theory is not to show that Darwin was correct about everything but to construct the best theory to explain the empirical data. Neuropsychoanalysts, like evolutionary theorists, are not interested in proving every claim made by a particular individual correct; rather, they are interested in constructing a correct theory of the mind. Neuropsychoanalysts simply think that a lot of contemporary evidence supports Freud, and they are modifying Freud’s theories as new data comes in. This is the scientific method, and if Lacanians cannot adhere to it they should not be licensed as practicing psychoanalysts.

It should be noted that the idea of a pure Lacanian analyst does not capture what goes on in typical psychoanalytic sessions. Psychoanalysts are typically pragmatic: they use whatever tools they can to help their patients. An analyst will usually understand psychology, biology, and psychoanalysis quite well, and will adapt and change their approach as they learn more about human nature through daily analytic practice and interaction with their patients. Only an extremely bad analyst would stick rigidly to, say, Lacanian theory in their practice. However, since psychoanalysts are engaged in helping people in need, we need to guard against worst-case scenarios and consider analysts who damage their patients’ well-being by rigidly applying a false theory.

Lacan wrote an enormous amount on psychoanalysis, discussing a large variety of topics from psychosis to neurosis and hysteria. He drew on examples from literature and philosophy, and his dense prose is nearly impenetrable, so discussing his work in detail would take a multi-book series. It is a difficult task to analyse a theorist’s views in a blog; however, this is the task I will undertake here. In this first blog I have set up the problem and shown how I aim to evaluate it. In the next blog I will outline Lacan’s theory and discuss what aspects of it, if any, can be made sense of empirically. I will discuss his use of psychologist Henri Wallon’s (1931) work, which showed that at six months both human infants and chimpanzees recognise themselves in the mirror: the infant becomes fascinated by his image, while the chimpanzee quickly loses interest in his. I will show how Lacan used Wallon’s data to construct his own concept of the Mirror Stage in the child’s development, and I will argue that Lacan’s use of the Mirror Stage is much too simplistic and needs to be modified in light of the mountains of empirical evidence produced by developmental psychology over the last eighty years.

I will then discuss Lacan’s claim that the child’s self is constituted by being grafted onto the language of the other. I will again show that developmental psychology strongly indicates that Lacan is incorrect in the way he conceives of this issue.

In my final blog I will discuss how Lacan uses his three concepts of The Real, The Symbolic, and The Imaginary to deal with neurosis, psychosis, and hysteria, and how his theories on these topics need to be modified in light of recent developmental and neuroscientific data. I will conclude by arguing that if Lacanian psychoanalysis is to become a scientific treatment, it needs to subject itself to falsification by the standard methods of science, in the same way that Freudian ideas are being tested through their merger with neuroscience.


[1] This tactic has also been used by theologians. In the past the Bible was treated as the literal word of God; however, whenever science refutes an aspect of the Bible, it is then claimed that the Bible should not be treated as literal. A similar approach has been used in interpreting Wittgenstein: see Crary and Read (eds.), ‘The New Wittgenstein’, where similar verbal moves are used in an attempt to rescue the early Wittgenstein from refutation (Peter Hacker correctly pours cold water on this tactic in his chapter in the book).