Monthly Archives: March 2014

HALLUCINATIONS AND THE HARD PROBLEM: TOM CLARK, SACKS AND DENNETT

In chapter 6 of his ‘The Mind’s Eye’ Oliver Sacks wrote up his diary entries about his experiences after he had been diagnosed with an ocular melanoma. Sacks had radiation treatment to try to kill the melanoma. After the treatment his damaged eye was covered with a thick patch. He reported the following strange experience, which he noticed a few days after his treatment:

“If I have been looking at something and then close my eyes, I continue to see it so clearly that I wonder whether I have actually closed my eyes. A startling example of this happened a few minutes ago when I was in the bathroom. I had washed my hands, was staring at the washbasin, and then, for some reason, closed my left eye. I still saw the wash basin, large as life. I went back into my room, thinking that the dressing over the right eye must be absolutely transparent! This was my first thought and, as I realised a moment later, an absurd one. The dressing was anything but transparent- it was a great wodge of plastic, metal, and gauze half an inch thick. And my eye, beneath it, still had one of its muscles detached and was in no position to see anything. For the fifteen seconds or so that I kept my good eye closed, I could not see anything at all. Yet I did see the washbasin- clear, bright, and as real as could be.” (The Mind’s Eye, p. 160)

On the Analytic Philosophy Facebook page Tom Clark (creator of the Facebook Naturalism page) used a musical hallucination produced by the same brain areas as the perception of music to illustrate what he thinks the hard problem of consciousness is. Tom’s conclusion is that the musical hallucination shows that our experiences are internal in both cases: experiences of real-world objects and hallucinations of real-world objects. He argues that this makes it very clear what the hard problem of consciousness is: how does the grey matter in our brains produce our whole worlds of experience? So while we can explain what the neural correlates of an experience are, we still haven’t explained how that experience arises. I think that Sacks’s experience also makes clear the supposed internal nature of experience, and hence what the hard problem of consciousness is.

Dennett famously discussed the nature of hallucinations in his ‘Consciousness Explained’, pp. 3-18. I think it is worth considering what Dennett’s views on hallucinations are and how he would try to handle Tom’s evidence. Tom is of course a former PhD student of Dennett’s, so he knows Dennett’s work well, and he obviously isn’t impressed with Dennett’s arguments on the topic. So this brief blog is for my own clarification rather than a criticism of Tom. However, I would welcome any feedback from Tom on the topic.

Dennett of course does not deny that we can experience hallucinations of music. On p. 4 of ‘Consciousness Explained’ he makes the following point:

“Not wanting to horrify you, then, the scientists arrange to wake you up by piping stereo music (suitably encoded as nerve impulses) into your auditory nerves. They also arrange for the signals that would normally come from your vestibular system or inner ear to indicate that you are lying on your back, but otherwise, numb, blind. This much should be within the limits of technical virtuosity in the near future-perhaps possible even today.” (Consciousness Explained, p. 4)

Dennett is discussing the possibility of scientists being able to create virtual worlds that could fool us. He does not doubt that we could be fooled by musical hallucinations. So he would presumably have no problem with the article that Tom Clark shared, though obviously he wouldn’t draw the same conclusions from it.

Dennett argues that it is pretty much computationally impossible to build a brain in a vat that could be fooled into thinking that it is a subject walking around the world experiencing this and that. He admits that there are some cases where we experience mild hallucinations, like phantom limbs, after-images, etc. However, he thinks that strong hallucinations of the brain-in-a-vat kind are pretty much impossible. He defines a strong hallucination as “a ghost that talks back, that permitted you to touch it, that resisted with a sense of solidity, that cast a shadow, that was visible from any angle so that you might walk around it and see what its back looked like” (ibid, p. 7).

Now in the Sacks quote above he describes seeing a scene that is indistinguishable from reality. However, notably, he takes a passive attitude towards the scene: he doesn’t move to see if it changes perspective, he doesn’t try to touch it, etc. Sacks’s attitude is one that Dennett claimed was typical of people experiencing hallucinations:

“But that review also provides a clue leading to another theory of the mechanism of hallucination-production: one of the endemic features of hallucination reports is that the victim will comment on his or her unusual passivity in the face of the hallucination. Hallucinators usually just stand and marvel. Typically, they feel no desire to probe, challenge or query, and take no steps to interact with the apparitions.” (ibid, p. 9)

It is interesting that when Sacks had his hallucination, which he claimed lasted for about fifteen seconds, he stood passively looking at it without moving about and probing it, just as Dennett’s model predicts. Dennett argues that hallucinations rely on mechanisms of the brain “expecting” us to interpret the data only along certain lines. So Dennett would presumably think that the article posted by Tom Clark does not really affect his position at all. However, Tom Clark would presumably reply that, on the contrary, his example refutes Dennett’s position. The hallucinations, while not strong in Dennett’s sense, are still reported to occur by subjects, and brain imaging supports some of these reports. Since, on pain of breaking Leibniz’s law, these experiences cannot be identical with particular brain states, we do indeed seem to be faced with a hard problem.

The problem is not unique to musical hallucinations. It is well known that two thirds of the occipital lobe is activated when people report experiencing mental imagery, and Dennett has written about this (see his 2003 Reply to Pylyshyn). However, he does not draw the same conclusion as Clark. The reason for this is that, according to Dennett, it only SEEMS to us that we are experiencing these hallucinations and mental images; we are not actually experiencing them. Clark, on the other hand, argues that we manifestly are experiencing these hallucinations; it is not merely that it seems to us that we are. This is the crux of the debate between Clark and Dennett: not whether people hallucinate, but whether hallucinations are “real seemings” painted in a Cartesian theatre by some kind of ghostly substance. I will briefly discuss Dennett’s take on the colour phi illusion to illustrate his notion of real seeming.

DENNETT ON REAL SEEMING[1]

In his Content and Consciousness Dennett talks about the fact that while it may seem to some people that they have mental imagery, closer examination reveals that what they call a mental image is really only a description. Twenty-five years later in his Consciousness Explained, when discussing Kosslyn’s experiments on mental imagery, Dennett noted that despite appearances mental imagery is really all tell and no show. One curious thing about Dennett’s view is his claim that, despite the way things seem, mental imagery is really mental description. What is strange about this view is the idea that a description can seem like an image. This is a very odd way to understand the word ‘seem’. A paradigm example of an x seeming like a y is given by Descartes. He talks about how a stick in transparent water will seem to be bent because of light refraction, though in reality the stick is not bent. What Descartes means by the words ‘seems to be’ is the same as ‘appears to be’; and this of course is the standard meaning of ‘seems to be’. However, even to a weak imager like me, it is patently obvious that mental images are nothing like mental descriptions. If something really seemed (as in appeared) to me like an image, then it follows that I would have an experience of something image-like, and a description is in no way image-like. This leads to the question: what could Dennett possibly mean when he admits that it at least seems to some people that they experience mental images?

In Consciousness Explained Dennett carefully explains what he means by the word ‘seems’; evaluating his views on this will help clarify his strange beliefs about the nature of images. In Chapter 5, Section 5 of Consciousness Explained Dennett discusses the colour phi experiment[2]. In this discussion he makes explicit his strange views on the nature of ‘seeming’. The colour phi phenomenon is apparent motion. We see examples of it on our television screens every day, where a series of still pictures is flashed one after the other at a certain speed to create the illusion of motion. Dennett discusses a simple example of colour phi where two spots separated by as much as 4 degrees of visual angle are flashed rapidly, creating the illusion of one spot moving back and forth rapidly (ibid, p. 114). Kolers and von Grünau (1976) did a phi experiment with two dots, one red and one green, flashing on and off. This gave the illusion of a red spot starting to move and changing colour to green mid-passage. Since the red dot is not moving and does not turn into a green dot, we need to ask what is going on with this illusion. The question is: how do we see the dot turn green before we actually see the green dot? One might think that the mind must first register the red dot and then the green dot, and that after that point the apparent motion must be played before the mind’s eye. To think this way, Dennett warns, is to demonstrate that one is still in the grip of the metaphor of the Cartesian Theatre (ibid, p. 115).
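To make the stimulus concrete, here is a minimal sketch of the kind of display sequence described above. The timing values and the repetition count are illustrative assumptions on my part, not the parameters Kolers and von Grünau actually used; only the two-location, two-colour structure is taken from the description above.

# A minimal sketch of the colour phi display: a red dot flashed at one
# location, then a green dot flashed at a second location a few degrees away,
# repeated for several cycles. Durations below are illustrative only.

from dataclasses import dataclass

@dataclass
class Flash:
    colour: str           # colour of the flashed dot
    position_deg: float   # horizontal position in degrees of visual angle
    onset_ms: int         # time at which the dot appears
    duration_ms: int      # how long the dot stays on screen

def colour_phi_trial(separation_deg=4.0, flash_ms=150, isi_ms=50, cycles=3):
    """Build the flash schedule for one trial. Observers typically report a
    single dot moving back and forth and changing colour mid-passage."""
    flashes, t = [], 0
    for _ in range(cycles):
        flashes.append(Flash("red", 0.0, t, flash_ms))
        t += flash_ms + isi_ms
        flashes.append(Flash("green", separation_deg, t, flash_ms))
        t += flash_ms + isi_ms
    return flashes

if __name__ == "__main__":
    for f in colour_phi_trial():
        print(f"{f.onset_ms:5d} ms: {f.colour:5s} dot at {f.position_deg:.1f} deg for {f.duration_ms} ms")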

To loosen the grip of this picture on our minds Dennett discusses two fictional processes which one could attribute to the brain. He calls them the Orwellian process and the Stalinesque process. The Orwellian process occurs when I misremember something because my brain tampers with my memory so that I no longer remember accurately. The Stalinesque process occurs when the brain projects a false picture of reality into the mind’s eye. Dennett notes that while a distinction between Orwellian and Stalinesque processes makes sense in the external world, it is an illusion to assume that it makes sense as an explanation of what is going on at the level of the brain.

Let us think of both of these processes as they apply to the case of colour phi.  In the Orwellian case we did not see the apparent motion; our brain merely revised our memory and informed us that we did see the motion. In the Stalinesque case we unconsciously registered the two dots and afterwards our brain created a kind of mock event for us to watch. Dennett notes that once we give up the notion of Cartesian Materialism, we will see that there is no answer to the question of whether the Orwellian or Stalinesque process took place. He puts things as follows:

So here is the rub: We have two different models of what happens to the color phi phenomenon. One posits a Stalinesque “filling in” on the upward, pre-experiential path, and the other posits an Orwellian “memory revision” on the downward, post-experiential path, and both of them are consistent with whatever the subject says or thinks or remembers…Both models can deftly account for all the data-not just the data we already have, but the data we can imagine getting in the future (ibid, pp. 123-124)

So there is no fact of the matter which can decide between the two different stories. Dennett argues that the reason that we cannot decide between the two accounts is that there is really only a verbal difference between them. With Dennett’s rejection of Cartesian Materialism and his alternative multiple-drafts theory of consciousness we can no longer draw a non-arbitrary line to decide when an event becomes conscious.  There is therefore no fact of the matter as to whether an Orwellian or a Stalinesque process took place.

When Dennett claims that we cannot decide between the Stalinesque and Orwellian alternatives we are left with what seems like a mystery. In the external world a red object is not really moving and turning into a green object, yet Dennett is also denying that a Stalinesque show trial is played before the mind’s eye. So the obvious question is: where does the movement of the dot occur? Dennett’s answer is that the dot does not move and turn green; it only seems to. However, to say that a dot seems to move is to say that people have an experience of the dot moving. And this leads us back to our original question: what generates this experience, and how is it generated? Dennett thinks that this is a bad question because the brain does not need to create an experience of the dot moving; it merely has to form a judgment that such movement occurred:

The Multiple Drafts model agrees with Goodman that retrospectively the brain creates the content (the judgment) that there was intervening motion, and that this content is then available to govern activity and leave its mark on memory. But the Multiple Drafts model goes on to claim that the brain does not bother “filling in” the blanks. That would be a waste of time and (shall we say?) paint. The judgment is already in, so the brain can get on with other tasks. (ibid, p. 129)

This claim of Dennett’s is extremely strange. He is claiming that the brain judges that the motion occurred; however, as a matter of fact, we do not experience the motion, we only think we do. The obvious reply to this is to categorically state that I do experience the movement and that I judge that the movement occurred based on this experience. In other words, the experience is prior to the judgment. The experience is not of a fact in the external world (where no movement occurred); it is rather an experience of a person’s subjective qualia. When Dennett denies that it is the experience that leads to the judgment, he is leaving the phenomenal experience out and focusing entirely on access consciousness.

The claim that Dennett is denying the existence of phenomenal consciousness is, on the face of it, an incredible one. So before proceeding it is important to show that this is indeed Dennett’s position, and I will now provide detailed quotes from him to make it clear. When discussing phenomenal space Dennett makes the following claim:

Now what is phenomenal space? Is it a physical space inside the brain? Is it the onstage space in the theatre of consciousness located in the brain? Not literally. But metaphorically? In the previous chapter we saw a way of making sense of such metaphorical spaces, in the example of the “mental images” that Shakey manipulated. In a strict but metaphorical sense, Shakey drew shapes in space, paid attention to particular points in that space. But the space was only a logical space. It was like the space of Sherlock Holmes’s London, a space of a fictional world, but a fictional world systematically anchored to actual physical events going on in the ordinary space of Shakey’s “brain”. If we took Shakey’s utterances as expressions of his “beliefs”, then we could say that it was a space Shakey believed in, but that did not make it real, any more than someone’s belief in Feenoman would make Feenoman real. Both are merely intentional objects. (Ibid, pp. 130-131)

The above passage is very instructive. It speaks to our topic of mental images and again shows that Dennett thinks of them as theorists’ fictions. Furthermore, his invoking of Shakey, who despite its verbal reports is not experiencing any items in phenomenal space, shows that Dennett thinks that we, like Shakey, despite our verbal reports, are not experiencing anything in phenomenal space. Dennett is claiming that our brains may tell us that we have such and such experiences, and as a result of this brain report we form the judgment that we saw a red light move and turn into a green light. However, this judgment, despite appearances, is not grounded in a phenomenal experience.

It is worth noting that a lot of thinkers misinterpret Dennett’s claims about ‘seeming’ and colour phi as indicating that he denies that we experience colours. This is not the case. Dennett’s arguments above apply only to colour hallucinations; Dennett tells a different story about how we perceive colour in the external world.

To understand Dennett’s views on colours it is helpful to think in terms of the primary/secondary quality distinction. One of the central motivations for claiming that the world is in your head is the existence of secondary qualities. When one looks at a beautiful garden one sees a variety of different colours: the bright yellow sunflower, the green grass, the multi-coloured butterflies, the blue sky and the bright yellow sun. Since the seventeenth century people like Galileo and Locke have been telling us that colours do not exist in the mind-independent world. Colours are effects of light reflecting off objects and hitting our retinas in a variety of different ways. The majority of scientists since Galileo have accepted this dichotomy between primary and secondary qualities. Primary qualities are solidity, extension, motion, number and figure, while secondary qualities are colour, taste, smell, and heard sounds. One consequence of accepting this picture is that the world is not as it reveals itself to us in our experience; furthermore, colours do not exist in a mind-independent world. A further consequence is that the rich world we experience, consisting of tastes, smells and colours, exists only within our minds. So on this view we have a subject who is presented with certain experiences, and only some of those experiences correspond with a mind-independent entity. The Cartesian materialist who accepts this world picture has a difficult job on his hands. Nowhere in the brain is the experience of a blue sky or of a yellow daffodil located. He may be able to provide neural correlates for these experiences, but he will not be able to point to the spatio-temporal location where the experience and the subject are located. So presumably the Cartesian materialist will have to argue for a strong emergence thesis.

Rather than going down this road, Dennett interprets the dichotomy between primary and secondary qualities differently from most contemporary theorists. Dennett has discussed the status of colours throughout his philosophical development: in particular in his 1969 Content and Consciousness, his 1991 Consciousness Explained and his 2005 Sweet Dreams: Philosophical Objections to a Science of Consciousness. I will now try to give a synoptic view of Dennett’s views on the topic of colours. In his first major discussion of colours he noted that, while most believed that colours are secondary qualities and do not exist in the external world, there are reasons to doubt this conclusion.

He centres his criticism on language and on what we are referring to when we use colour words. If we view colours as secondary qualities we are committed to the view that when I refer to something red I am referring to something within my mind. Now if we accept this view, then when two people claim that they are referring to something red we don’t know whether they are referring to the same thing, since their inner experiences of red may be different, and we cannot decide because we have nothing public to compare their experiences to. Now if we do not want to admit the possibility that a teacher can never know when his pupil has actually learned the meaning of the word ‘red’, we must admit that colour words refer to publicly observable entities.

One difficulty is that if one solves the sceptical problem of colour reference by arguing that colour words refer to publicly observable entities, one is left with the conundrum of where colours exist. They don’t exist in the mind-independent world, and they don’t exist in the mind, and there is nowhere else for them to exist. So one is led to the silly conclusion that colours do not exist anywhere. This conclusion must be wrong; and Dennett correctly notes that colour words refer to publicly observable entities that one can be right or wrong about (Content and Consciousness, p. 161). So colours seem to exist, and to exist in a publicly observable sphere.

For Dennett, since colours are publicly observable entities which we can be right or wrong about, they must be properties of the external world. This leaves Dennett with the question: what properties exactly are they? He notes that colours are not reflective properties of surfaces which can be cashed out at a sub-atomic level. This is because:

“Although the sub-atomic characteristics of surfaces that reflect light predominantly of one wavelength can now be described in some detail, those different types of surface do not correspond neatly to the colours we observe things to be.” (ibid, p. 161)

Also, different wavelengths of reflected light can cause the same colour experience in a person. So the job of characterising what property colours actually are is more complex than one might assume. Dennett notes that when a person is referring to red we will need to cash out the property they are referring to in terms like: the person is referring to the reflective property of x or y or z… (and the disjunction associated with one colour might be very long).

Dennett asks us: if the disjunction of properties which can be associated with a person’s experience of a colour has little internal unity, are we driven to the conclusion that colours do not exist? To think through this possibility he considers colour-blind people, who have poorer discriminative capacities than us, and a hypothetical alien whose colour-discriminative capacities are greater than ours. He notes that we would say that the colour-blind man, who may say that ripe (red for us) apples and grass are the same colour, is suffering from a cognitive illusion. On the other hand, if an alien had greater discriminative capacities than us, so that it would constantly see things as changing colour, we would also say that he was experiencing colour illusions. This is because the meaning of colour terms is defined in terms of OUR discriminative capacities, which means that WE judge certain things in the world to be red, green, etc. So relative to our form of life these other people would be suffering from a form of cognitive illusion.

Dennett concludes with the following statement:

Colour is not a primary physical property like mass, nor is it a complex of primary properties, a structural feature of surfaces. Nor again is it a private ‘phenomenal’ quality or an ‘emergent’ quality of certain internal states. Colours are what might be called functional properties. A thing is red if and only if when it is viewed under normal conditions by normal human observers it looks red to them, which only means: they are demonstrably non-eccentric users of colour words and they say, sincerely, that the thing looks red. Their saying this does not hinge on their perusal of an internal quality, but on their perception of the object, their becoming aware that the thing is red (ibid, p. 163)

I am not really sure whether Dennett manages to avoid the problem of where the experience of red is located. However, it should be obvious that he is not denying that colours exist; rather, he is claiming that they are not paraded in a Cartesian Theatre.

My reaction to Dennett’s views on hallucinations and mental imagery is conflicted. On the one hand I find myself arguing that when I call a mental image before my mind’s eye I do damn well experience a mental image; I don’t merely SEEM to experience it. On the other hand I find myself worrying that this reaction of mine is simply me giving my judgments about my subjective experiences papal infallibility: I simply judge that because it seems to me that x is presented before the mind’s eye, I cannot doubt that this is the case. In the case of perception of an object in the external world I can make judgements which can be intersubjectively tested, and these tests can either weaken or strengthen my claim. Or even if there are no other people around, I can test whether what I am seeing is an illusion by inspecting the viewing conditions (whether there is good light), by walking up to the object, walking around it, touching it, etc. In the case of hallucinations such intersubjective, bimodal tests are not possible. Likewise, in the case of mental imagery I cannot test the lighting conditions or touch the images or walk around them. So it is extremely odd that I would accord myself papal infallibility in the case of hallucinations and mental imagery, given that I cannot really test my judgements[3]. All I can really appeal to is the fact that I intuitively know I experience mental imagery. However, if the last three hundred years of science have taught us anything, it is that we cannot be uncritically reliant on our supposedly certain intuitive judgements (nor should we ignore them). So I go back and forth on this issue and have never really solved it. Tom Clark feels certain that REAL SEEMINGS exist. Matt Bush argues that they don’t; they are just a product of the manifest image. I have never really made up my mind on the matter. Recently Matt has informed me that he may have an argument for the existence of conscious experience that does not rely on the myth of the given. Perhaps this argument will solve a lot of these difficulties. However, I think my next project is to try to discover whether ‘The Given’ really is a myth.

 

[1] This section on Real Seeming is taken from my much longer and more detailed take on Dennett’s view of consciousness and mind, ‘Dennett and The Typical Mind Fallacy’. A reader interested in understanding in more detail how I understand Dennett’s view, and where my view differs from his, should check out that paper. They will find it on my blog and on my Academia.edu page.

[2] See Kolers, P. A. and von Grünau, M. (1976) “Shape and Color in Apparent Motion”, Vision Research 16, pp. 329-335.

[3] I discuss the rotation tests and the neuroscientific tests of Kosslyn et al. in my ‘Dennett and The Typical Mind Fallacy’, but those third-person tests are not the reason people believe they have imagery; rather, their purported experiences are the reasons.

Psychoanalytic Interpretation and Unconscious Phantasies: Part 1


“In fact we are far from having to employ mere imagination or blind guess work, as even regards the first year of life. When all the observable facts of behaviour are considered in the light of analytic knowledge gained from adults and from children over two years, and are brought into relation with analytic principles, we arrive at many hypotheses carrying a high degree of probability.” (Susan Isaacs: The Nature and Function of Unconscious Phantasy, p. 2)

When one reads the work of Melanie Klein, one is struck by the bizarre phantasies that she attributes to the child. In her book ‘The Psychoanalysis of Children’ Klein interprets the behaviour of children in terms of the child unconsciously phantasising about ‘ripping the mother’s breast apart’, or about ‘burning and drowning his mother with his urine’, etc. When reading Klein’s views one is left with the impression that she is massively over-interpreting the behaviour of the child. In Quine’s radical translation thought experiment he shows that when trying to translate the utterances of a native of a tribe with whom we have had no previous contact, there are massive problems of indeterminacy[1] in the reference of observation sentences. Quine argues that if a rabbit runs by and the native shouts ‘Gavagai’, we cannot automatically assume that ‘Gavagai’ means ‘Lo, a rabbit’. Furthermore, if we try to interpret ‘gavagai’ as a term referring to rabbits our problems become worse, because from a behavioural perspective we have as much evidence that ‘gavagai’ refers to undetached rabbit parts, to a particular instance of universal rabbithood, or to ‘it rabbiteth’ (a feature-placing term on a par with ‘it rains’) as we do that ‘gavagai’ refers to rabbits. Some thinkers argue against Quine that interpreting ‘gavagai’ as referring to rabbits, as opposed to the other options, has more intuitive psychological plausibility. Such thinkers argue that because of the innate structure of the human mind we will divide up the world similarly, and hence we are justified in thinking that the underdetermination of the reference of ‘gavagai’ is overcome by our innate biases. Now the preceding argument is not a proof that ‘gavagai’ refers to rabbits, but it at least provides some kind of evidence that our interpretation of the reference of the native’s term is correct. However, Klein’s interpretation of the child’s unconscious phantasies is even more massively underdetermined by the data of experience than Quine’s examples, and furthermore Klein’s interpretations are also wildly counter-intuitive. The old saying goes that incredible claims require an incredible amount of evidence. So in this blog I will look at the type of evidence used by psychoanalysts to support their more elaborate claims about childhood phantasies and cognition.

One of the best expositors of the nature of phantasy was Susan Isaacs. She made the following claims about the evidential procedure used by psychoanalysts when attributing phantasies to the child:

“When we turn to children under two years, we bring certain proved instruments of understanding to the study of their responses to stimuli, their spontaneous activity, their signs of affect, their play with people and with material objects, and all the varied aspects of their behaviour. First, we have those principles of observation already outlined, the value of observing context, of noting precise details, and as regarding the data observed at any one moment as being a member of a series, which can be traced backward to their rudimentary beginnings and forward to their more mature forms. Secondly, we have the insight gained from direct analytic experience into the mental processes so clearly expressed in similar types of behaviour ( continuous with these earlier forms) in children of more than two years; above all, the evidence yielded by the repetition of situations, emotions, attitudes, and phantasies in the ‘transference’ during the analysis of older children and of adults.” (ibid, p. 8)

The evidence that Isaacs claims is relevant to interpreting the child comes from the primary tools of science: behavioural evidence and experimental evidence. However, she also appeals to evidence from the analysis of older children and adults as a way of rationally reconstructing childhood phantasies.

In his 1985 ‘The Interpersonal World of The Infant’ Daniel N. Stern followed Isaacs’s method and expanded on it by combining insights from developmental psychology with the clinical experience of psychoanalysts. The psychodynamic approach is a technique in which the analysand tries to rationally reconstruct his experiences with the analyst to whom he recounts them. Psychoanalysts from Freud through to Klein and Bion have interpreted various kinds of neuroses and psychoses as resulting from childhood experiences which leave people stuck in some earlier developmental phase.

So, for example, Freud’s model had his famous psychosexual stages of development at its core. He argued that through analysing patients it was clear that patients who were obsessional and controlling were stuck at the anal stage of development (which occurs between the ages of 18 months and 30 months). The anal stage of development occurs when the child is being toilet trained; if a child goes through harsh toilet training with their parents they may later develop an anal-retentive personality. Stern correctly noted that one problem with Freud’s model is that its very specific predictions simply do not obtain. Another problem is that the contradictory models of Anna Freud, Klein, etc. all make different developmental predictions which also do not obtain. Stern argues that psychoanalytic models can be made more accurate by being merged with the empirical data obtained by developmental psychology.

However, he also thinks that developmental psychology can be enriched by the insights being gained daily in clinical practice. Developmental psychology is great at constructing behavioural tests which can be used to show what the child’s preferences are. Habituation tests, which indicate when a child is surprised by a change in their environment, can be used to reveal the child’s expectations about the behaviour of the world, and hence the ontology that the child expects of the world. We saw above that Susan Isaacs argues for a similar approach to Stern’s in her paper ‘The Nature and Function of Unconscious Phantasy’; she too argues that we need to merge behavioural/developmental data with psychoanalytic interpretation. She uses Freud’s 1922 discussion as a clear example of psychoanalysts interpreting the behaviour of a child psychoanalytically.
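To make the habituation logic concrete, here is a minimal sketch of how looking-time data are typically interpreted. The 50% habituation criterion, the dishabituation threshold and the looking times are hypothetical values chosen for illustration; they are not drawn from Stern, Isaacs or any particular study.

# An illustrative sketch of infant habituation / dishabituation logic.
# All numerical values below are hypothetical.

def has_habituated(looking_times_s, criterion=0.5, window=3):
    """The infant counts as habituated when mean looking time over the last
    `window` trials falls below `criterion` times the mean of the first
    `window` trials (i.e. interest in the familiar event has waned)."""
    if len(looking_times_s) < 2 * window:
        return False
    baseline = sum(looking_times_s[:window]) / window
    recent = sum(looking_times_s[-window:]) / window
    return recent < criterion * baseline

def dishabituation(test_looking_s, habituated_looking_s, threshold=1.5):
    """Renewed, markedly longer looking at a test event relative to the
    habituated level is taken to indicate that the event violated the
    infant's expectations."""
    return test_looking_s > threshold * habituated_looking_s

if __name__ == "__main__":
    familiar_trials = [12.0, 10.5, 9.0, 6.0, 4.5, 4.0]   # looking times in seconds
    print(has_habituated(familiar_trials))               # True: interest has waned
    print(dishabituation(test_looking_s=9.5,
                         habituated_looking_s=4.0))      # True: "surprise" at the test event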

Freud spoke about a child who was very obedient and good at coping with being separated from his mother. He noted that when the mother was gone the child would gather all of his toys and hide them, saying ‘ohhhh’ (which Freud interpreted as meaning ‘gone’). For Freud the child was using the toys as a symbol of his mother. Later Freud noticed the child playing with a wooden reel with a piece of string wound round it. The child would throw the object away and say ‘ohhhh’ (gone) and then pull it back and say ‘da’, which Freud interpreted as meaning ‘back’. Freud claimed that the child was using the game to cope with the loss of his mother. In the child’s phantasy the wooden reel represented his mother, and by being able to control the toy he was symbolically assuring himself that he had control over his mother’s absence. Freud claimed that at a later stage he observed how the child dealt with a prolonged absence from the mother. When the mother returned from her absence the child was looking in the mirror and covering his eyes, saying ‘ohhhh’ (gone), and then uncovering his eyes and saying ‘da’ (back). So by controlling his own disappearance and reappearance the child was, in his phantasy, giving himself control over an emotionally distressing situation.

Freud’s interpretation of the child’s behaviour has some plausibility to it. However, from a behavioural point of view there is little to recommend it over the assumption that the child is simply playing a game without any unconscious attempt to master emotional difficulties. Furthermore, if one were to interpret the child as simply playing a game one would have a simpler explanatory theory than Freud’s more elaborate reconstruction in terms of unconscious phantasy.

One must therefore ask: what evidence can Freud bring to bear to support his more complex interpretation of the child’s behaviour? Freud can point to the fact that children are usually distressed when they are separated from their mothers. He could further point to the fact that the child he is talking about managed to deal with this stress in a successful manner (unlike a lot of other children), and the fact that the child’s games indicate a symbolic mastery over disappearance and reappearance suggests a causal connection between the two behaviours. Furthermore, various analysts could point to their own private practice with children, which does indeed indicate that children’s play reveals their unconscious concerns and phantasies about reality.

However, there is a serious difficulty with the above reply. Different theorists within the psychoanalytic community have different interpretations of the same behavioural patterns. So, for example, people using Klein’s, Lacan’s or Anna Freud’s theories will interpret the behaviour of children differently, because their different theories lead them to organise the data of experience differently. Typically analysts argue that their interpretation is proven correct by the pragmatic success of their therapeutic interventions. However, the best empirical evidence we have so far does not indicate that any of the above theories outperforms the others in terms of pragmatic success in analysis. Therefore we are presented with a situation where there are alternative interpretations of childhood experiences which all cohere with the observational data and which result in equally successful treatments. A possible way to decide between these alternative theories is to see which of them best coheres with current developmental data. This will give us some traction in the debate.

So in my next blog I will consider Lacan’s and Klein’s different theories of childhood development between birth and two years old. I will consider their theories in the light of current evolutionary psychology, neuroscience, and, most importantly, contemporary developmental psychology. The theory which best coheres with our current scientific knowledge is more LIKELY to be the more accurate theory and to stand up to further experimental tests. In my final blog I will consider Klein’s and Lacan’s different clinical theories of hysteria.


[1] For a detailed look at Quine’s Indeterminacy of Translation argument see my blog Indeterminacy of Translation and Innate Concepts.

PSYCHOANALYSIS AND SCIENTISM: A DIALOGUE WITH MAREK SINASON


In a recent blog, “Symington on Psychoanalysis and Intellectual Disability”, I criticised a hypothesis of Neville Symington’s on the cause of low IQ in people with intellectual disability. I argued that Symington’s hypothesis did not stand up to critical scrutiny and that he should take into account facts from developmental psychology and neuroscience to help him develop a superior account. Marek Sinason, a neuroscientist, disagreed with my criticism. Our debate got very heated, and each of us felt that the other had inadvertently misrepresented his views. At the end of our dialogue Marek suggested that I re-read his criticisms, which would demonstrate that I had been misreading him. So I decided that in this blog I would revisit our discussion to see whether we can clarify what our beliefs actually are and perhaps find some common ground.

To some degree our dispute is unusual. I am a philosopher who supports philosophical naturalism. So a large part of my time is spent defending attempts to naturalise epistemology and metaphysics. Another area of research I am deeply involved in is attempting to explain intentionality, consciousness, and language naturalistically. Marek, on the other hand, is a scientist who thinks that science has a scope and limits and should not be extended beyond those limits. He argues that areas like folk psychology, psychotherapeutic dialogue, and literature have their own evidence, and that their practices are not diminished by not being reduced to physics and chemistry.

Marek’s position did not centre primarily on issues of scientism. His first criticism of my view was that interdisciplinary research is not always beneficial. He gave some examples where interdisciplinary research would not necessarily be beneficial:

“While many professions may benefit from learning of the fruits of others it is another thing to imply they are deficient without them. For example, geography teachers may benefit from learning about neuroscience, child development, programming, criminal law, Shakespeare…yet they may be very good teachers without the benefits of this additional expertise.

As a thought experiment, I find it useful to consider what are the realms of knowledge of other disciplines you would consider important for an adult mathematician to know about to not consider them deficient or worse, to consider the products of mathematics redundant?”

My reply was that I wasn’t arguing for interdisciplinary research in all areas, and I agreed with his geography teacher example. I see no reason why a geography teacher necessarily needs to be an expert on any of the above topics. However, in the case of psychoanalysts, since they are making developmental claims and claims about the mind/brain, they obviously have a duty to know the relevant neuroscience and developmental psychology. Our dispute on this issue went back and forth for a while. I continued claiming that since psychoanalysis is an empirical scientific discipline its practitioners should know relevant facts that cast doubt on the truth of key claims they make. Here Marek disagreed with my characterisation of psychoanalysis as a science and pointed out that not all analysts agree with this claim:

 “While there may be psychoanalysts who think the interdisciplinary work with neuroscience is worthy of a new discipline Neuropsychoanalysis. Others, such as Wilfred Bion, who served in a Tank Commander before studying medicine and then psychoanalysis did not consider those without this experience or interest were deficient.”

It was here that our dispute became interesting and a lot more heated. I argued that psychoanalysts are making claims about developmental issues, and that these claims are in fact scientific claims whether analysts acknowledge this or not. I further argued that such psychoanalysts are not doing their jobs if they do not modify their theories as we learn new scientific facts about the world.

Marek’s reply was that my argument presupposed that I knew what the objective truth on all these developmental psychology matters was; in other words, that I was naively assuming that there was one true theory of child development and that Symington was ignoring this one true theory:

“Is Spelke’s wrong or right about intellectual differences between men and women? What is the empirical truth? What would you like ALL psychoanalysts and ALL psychologists to think on this matter?”

To this I replied that I did not have to be able to solve every question in contemporary psychology in order to judge that Symington had made a claim that was simply incorrect. I argued that Symington’s theory had absolutely no empirical evidence to support it and hence there was no reason to believe it was true. Marek’s claim about disputes within current developmental psychology struck me as beside the point. So, for example, there are current disputes within medicine on various subjects, but nobody thinks that this means we cannot know that current research has more to support it than the ideas of Galen, or that because contemporary cosmologists disagree about the nature of string theory we have no reason to think their theories are superior to Thales’s cosmology.

When looking back on our debate I noted that Marek never answered my question. So I repeat it here. I would like him to answer the following questions: (1) Does he think that there is a fact of the matter as to whether Symington’s claim that low IQ is the result of a psychic turning away from reality is true? (2) If he thinks there is a fact of the matter, what evidence could we bring to bear on it? (3) If he thinks there is no fact of the matter, does he think the same thing holds true of physics? Does he think that there is no fact of the matter as to whether, say, the cosmology proposed by Stephen Hawking is more correct than the cosmology proposed by Thales?

I think that part of our dispute may have arisen from my thinking that he was arguing that the existence of debates within contemporary developmental psychology means that we cannot judge the truth or falsity of Symington’s theory. It was for this reason that I called Marek a postmodernist. I now think I may have misinterpreted his views, so if he answers my three questions above this will help me understand his point of view on the matter. If I misunderstood his position I apologise. Obviously I would have good reason to oppose the introduction of “anything goes” postmodernism into psychology, in the same way I would oppose it being introduced into medicine, because such an introduction would cause a lot of vulnerable people to be put in harm’s way. However, given that I cannot say for certain that Marek supports postmodernist views, I should not have argued so aggressively in our discussion. I should note that buying into the metaphysics of ‘facts of the matter’ and correspondence theories of truth is not the only way to avoid the terrors of post-modernism; personally I prefer Quinean holism (with the idea that coherence leads to correspondence) splashed with pragmatism. However, perhaps a discussion of this point should wait till Marek answers my three questions above.

I don’t want to continue to go back and forth on the same points. I concede that Marek is not denying that evidence is important in judging psychoanalytic theory. He is merely proposing that interdisciplinary research is not necessarily always a good thing. I agree with this to some degree, but I think that with psychoanalysis, research from neuroscience and developmental psychology is relevant. This is not to dismiss the importance of intersubjective dialogue between the analyst and analysand. The majority of cases of psychoanalysis, and psychotherapy in general, involve emotional and existential crises. So, for example, a person may develop trust issues because of his early relationship with his primary caregiver. These trust issues may result in the person being isolated and feeling depressed. Working through these issues with an analyst can probably be done without any medication and be facilitated by an empathic analyst. However, there are other cases where analysts hold the belief that medication is a bad thing and that all depression is the result of some childhood trauma. In such a case a person who suffers from purely endogenous depression will present himself to the analyst, be advised that medication is a bad thing, and proceed to spend 150 a week on therapy looking for a cure that never comes. Surely this type of case can be avoided by analysts having some biological training. Having this training doesn’t mean that the analyst is acting as a rival to the doctor, offering alternative diagnoses; rather, it is just a way of educating analysts so they can be aware of patients whom they may not be able to help without medical intervention. Nurses, for example, are trained in basic psychology, biology, etc., not so they can offer alternative diagnoses to doctors but so they can manage their patients’ care as best as possible and contact specialists as needed. I see no reason why psychoanalysts shouldn’t have similar training.

I acknowledge Marek’s point that interdisciplinary research is often full of people making naive comments that seem silly from the point of view of experts in the relevant fields. I am not proposing witch hunts on people who make naive mistakes. It is impossible to be an expert on everything, so howlers are always going to happen. Within linguistics, experts in syntax often make claims about phonology that experts in phonology would consider outdated or naive. My point about Symington was that his theory was false and that this could have consequences for patient care, so I think he could do with modifying his theory. I still think that I am correct on this point. If Marek disagrees with me I would be interested in his reasons for thinking that Symington’s hypothesis is correct.

I mentioned that analysts who are ignorant of biology could end up causing harm to their patients. I think the history of hysteria (now known as conversion disorder) bears this out. Hysteria is a condition in which people have some of the following symptoms: feeling unwell, lipothymias, vague diffuse states of fatigue, and angry outbursts; spasms and paralysis are normally found only in women, not men, but multiple conversion pains are common in men, as well as fear of heart disease, digestive disturbances, and poorly defined neurodigestive disorders (Dor, p. 97). In hysteria patients typically present symptoms which they think represent a particular disease; however, their symptoms correspond only to their own inexact idea of their body. So a patient may present with heart trouble and be complaining about where they think their heart is, as opposed to where the heart actually is. When they are informed of this fact their symptoms move to the appropriate bodily position. It is for this reason that hysteria is assumed to be psychosomatic.

Throughout history psychoanalysts have treated people for supposed hysteria when in fact such patients were suffering from the early stages of a neurological disorder like MS. Now if a patient presents himself to an analyst with what seems to be hysteria, the analyst needs to be mindful that it could be a real neurological disorder. Neurologists sometimes have difficulty diagnosing disorders like MS, MMD, or Parkinson’s in their early stages. Some psychoanalytic schools have no close ties with biology, so they may not be in any position to recognise when a person’s symptoms indicate a biological as opposed to a psychological disorder. So a patient could spend a long time and a lot of money in analysis until such time as their actual disorder overwhelms them. I think that with hysteria an analyst has a responsibility (which should be legally enforced) to work with a multi-disciplinary team. In fairness, most analysts are very honest and responsible people who will insist that the patient submit himself for rigorous medical tests while undergoing analysis. However, I don’t think this should be left to the individual analyst but should be a matter of health board and government decision. I am not sure if this counts as scientism in Marek’s book, but it seems to be just good sense to me.

Near the end of our discussion Marek made a point which I think needs to be discussed further:

“To the extent we have communicated at all, that was by virtue of the (incomplete) powers of dialogic exploration. This facility of human relations affords many things including the explorations and evaluations of ideas and formulas that may or may not have any value or applicability in this Universe at any time, or at least not right now. We did not have recourse to construct a testable hypothesis and collect statistical data to interpret before replying to each other. When speaking with individuals rather than standard deviations, there are things that can be explored that may or may not generalise beyond the confines of that dialogue…yet a dialogue may still be, powerful, moving, funny or a waste of time.
I do not feel the need to ridicule or undermine the power of dialogue or other forms of human relations in order to appreciate the value and limits of modern science. If you were to sit through a lecture on the science of humour you might agree.”

I agree with Marek on this point to some degree. When communicating with each other we were not explicitly constructing testable hypotheses and collecting statistical data to interpret each other; not at the personal level anyway. At the sub-personal level we were noticing statistical patterns in what was being said to us; we were also using our background knowledge to help us construct our arguments and trying to form pictures of each other’s cognitive models, theoretical commitments, etc. Most of this was of course done unconsciously. A scientist could of course try to rationally reconstruct precisely what was going on; however, if they ignored the fact that we are intentional agents engaged in a process of trying to communicate with each other they would miss a lot. If a scientist focused entirely on, say, the physics and bio-physics of the situation they would miss a real pattern in our environment, i.e. our intentional communication. So I agree that at this stage of our development the best way to interpret each other’s behaviour is through adopting the intentional stance. This is the intersubjective level of discourse Marek speaks of, and I agree it is very important. It may be the case that eventually some form of eliminative materialism is vindicated and we can predict and control our sociological environment by adopting design-level or physical-level explanations of human behaviour; however, I don’t think that we are anywhere near that level of explanation. So for now our best explanations are intentional explanations in terms of propositional attitude ascription. So if my friend tells me he will meet me at the St Patrick’s Day parade at 2 o’clock I will be able to accurately predict his behaviour, ceteris paribus. This will not be a scientific explanation, just a bland use of folk psychology that is good enough for most purposes. Freud suggests enriching our folk psychology by attributing to people propositional attitudes of which we are unconscious but which influence our behaviour. I think that this does give us some kind of useful predictive and explanatory power. And I agree with Marek that employing this approach in intersubjective communication between analyst and analysand has real explanatory power, and it is not power that is best explained in terms of physics-level descriptions. Not at our present level of development anyway.

Stereo-Sue as a Real Life Problem of Mary


In chapter 5 of his 2010 book ‘The Mind’s Eye’ the neurologist Oliver Sacks discussed the case of Sue Barry, a woman who grew up cross-eyed and whose eyes did not work in tandem, so that she had to view things one eye at a time. As a result of her condition she was unable to view the world stereoscopically, so she lacked the ability to see the world in the three-dimensional manner normal people do. Sue was a neurobiologist who had read the works of Hubel, Wiesel and many other scientists on stereo vision. In 1996, at a shuttle launch party, she met Oliver Sacks and discussed her lack of stereo vision with him. He asked her if she could imagine what it was she was missing as a result of lacking stereo vision. She said that yes, she could, because her knowledge of neurobiology helped her understand what the experience of stereo vision would be like. In 2004 Sue sent a letter to Sacks to inform him that she had had treatment on her eyes and now had stereopsis. She noted the following about her experience of stereo vision:

“You asked me if I could imagine what the world would look like when viewed with two eyes. I told you that I thought I could… But I was wrong”

Now clearly Sue’s case has a lot in common with Frank Jackson’s thought experiment ‘The Problem of Mary’. In Jackson’s thought experiment Mary is a neuroscientist who knows everything about the physics and neurobiology of colour vision. However, she has been brought up in a black and white room, so she has absolutely no experience of colour. One day she is released from her black and white room and she experiences colour for the first time. A substantial proportion of philosophers have argued that after being released from her room she will be in awe of her experience of colour, which will go beyond anything she could have predicted on the basis of her complete knowledge of third person science. So they argue that there are experiential facts which are not captured by a complete scientific understanding.

In some ways one could argue that Stereo-Sue is a real life Mary who supports the conclusion that there are experiential phenomena, e.g. qualia, which cannot be captured by third person science. However, there are obvious disanalogies between the two cases, the most important one being that in the case of Mary it is stipulated that she has ALL of the scientific knowledge of colour, while Sue is an extremely well educated neuroscientist who obviously doesn’t know everything about the neuroscience and physics of stereo vision.

Dennett has critiqued ‘The Problem of Mary’ thought experiment as a simple intuition pump where the conclusion is more or less stipulated. He argues that one could just as easily draw the opposite conclusion: that upon leaving the room Mary is not surprised, because given that she knows all of the science she will know what the experiential facts will be like. Dennett’s point is that we can stipulate how Mary will react all we want; the truth is we do not know how she will react. The thought experiment is merely designed to make it seem plausible that there will be experiential facts which third person science cannot capture. However, the thought experiment proves no such thing.

Now in the case of Stereo-Sue, because she is a real person, she obviously doesn’t know everything. So we cannot draw any conclusions as to whether knowing all of the relevant scientific facts about stereo vision would have enabled her to predict what the experience of stereo vision would be like. In Sue’s case the amount of scientific knowledge she had wasn’t enough for her to predict what her experience would be like if she had stereo vision. However this tells us little about whether she would have been able to predict accurately what the experience of stereo vision would have been like if she knew more third-person science, or if, a la Jackson, she knew all of the third-person science. So clearly Stereo-Sue’s story has little bearing on ‘The Problem of Mary’.

Nonetheless the case of Stereo-Sue led Oliver Sacks to draw the following dramatic conclusion:

“She had discovered for herself that there is no substitute for experience, that there is an unbridgeable gulf between what Bertrand Russell called “knowledge by description” and “knowledge by acquaintance,” and no way of going from one to the other” (The Mind’s Eye, p. 131)

Sacks’s view of the case of Stereo-Sue is similar to that of people like Tom Nagel. Sacks thinks that her case demonstrates an unbridgeable gap between what we can know from third-person science and what we can directly experience. However, there is no reason to draw the dramatic conclusion that Sacks does. Granted, some of Sue’s descriptions are extremely interesting and extremely dramatic; she reports the following experience after a session with her developmental optometrist:

“I went back to my car and happened to glance at the steering wheel. It had “popped out” from the dash-board. I closed one eye, then the other, then I looked with both eyes again, and the steering wheel looked different. I decided that the light from the setting sun was playing tricks on me and drove home. But the next day I got up, did the exercises, and got into the car to drive to work. When I looked at the rear-view mirror, it had popped out from the windshield.” (ibid, p. 128)

“Today, I was walking by a complete horse skeleton in the basement of the building where I work, when I saw the horse’s skull sticking out so much, that I actually jumped back and cried out.” (ibid, p. 130)

Sue’s experiences are extremely interesting; however they do not support Sacks’s conclusion. If Sacks wants to point to an unbridgeable chasm between experience and knowledge he will need a real life case of Mary. This will obviously never happen, as it is extremely unlikely that humans will ever have complete knowledge of the physical world.

Now Stereo-Sue could be used as part of a kind of inductive argument. If we could discover a variety of cases of scientists who had similar impairments and who, after an operation, discovered experiences that they could not have predicted from their science, it could be argued that this is inductive evidence that in Mary’s case she is likely to be surprised as well. However such an argument would be extremely weak, as there is no reason to think that a person with ALL scientific knowledge would have predictive powers even remotely analogous to those of contemporary scientists. Therefore the inductive argument fails. So overall, it seems that despite what Sacks thinks, there is no reason to think that the Stereo-Sue case reveals an unbridgeable chasm between experience and knowledge. The chasm is unbridgeable for us today, but it may not be for Mary.

Symington on Psychoanalysis and Intellectual Disability

A BRIEF DISCUSSION OF NEVILLE SYMINGTON’S: THE PSYCHOTHERAPY OF A MENTALLY HANDICAPPED PATIENT

I have recently been reading Neville Symington’s (2007) book ‘Becoming a Person’, which is a collection of papers describing his experiences as a psychoanalyst. In this interesting book he describes his own experience of undergoing psychoanalysis as well as his experiences as an analyst to people with psychosis, people who are psychopaths, and people who are intellectually disabled. The book is a collection of papers written over a period of thirty years which outline how he has developed as a person through different aspects of the analytic experience.

Chapter 8 of the book is a discussion of his experience acting as therapist to a 33-year-old adult who had an intellectual disability. His client’s name was not given, in order to protect his privacy, so a pseudonym was used: Harry Smith. Harry had an IQ of 59. Conventional psychoanalytic wisdom holds that only people who have an IQ of 120 or above can benefit from psychotherapy, so Symington was going against conventional wisdom in taking on Harry.

After reading the case study I decided to write a short blog discussing it. Before proceeding I should note that I am not a therapist of any kind, so this blog is the opinion of an interested layman as opposed to a psychiatrist, psychoanalyst, psychologist or therapist. I did study psychoanalysis as an undergraduate as part of my philosophy degree. Furthermore my PhD centred on the philosophy of science, mind and language, so I am very familiar with the evidential bases used in all of the main therapies. A few years back I underwent psychoanalysis for a year and a half, so I am familiar with the experience of psychoanalysis both from a theoretical point of view and from the point of view of an analysand. I have also worked as a care assistant with people who have intellectual disabilities for over ten years, so I am aware from a practical perspective of some of the emotional difficulties which face people with intellectual disabilities throughout their lives. For these reasons I decided to give my perspective on Symington’s discussion of psychotherapy for people with intellectual disabilities.

I found his paper both very impressive and very disappointing at the same time. What I found impressive was his wisdom in helping Harry in the therapeutic setting; however I found his theoretical discussion seriously defective. I will begin with the positive and discuss what impressed me about the paper.

According to Symington, Harry came from a working class background; both of his parents were hard workers and well presented, though the mother was a bit more dishevelled. The father, mother and Harry all lived together in a small flat. However the parents lived separate lives: the father lived in his own room, which had a television, armchair, bed and all his own possessions. The mother and Harry had their own rooms but spent all of their time together. The father thought that Harry was much smarter than people gave him credit for and wanted Harry to get a full-time paid job, while the mother claimed that she just wanted Harry to be happy. Harry’s father was a former alcoholic, and though he had not drunk for years Harry’s mother took a dim view of the father, and Harry was aware of this.

When Harry began his therapy he was obsessed with the question of his intelligence level and anxious about the conflicting opinions of his abilities that his family and carers had of him. He talked incessantly in his early sessions and found it difficult to concentrate on what his therapist said to him. His primary aim was to get his therapist to give an assessment of his abilities which he could file alongside the various assessments he had already received. Symington, instead of offering his own assessment, listened to Harry and tried to get Harry to discover his own abilities.

As therapy went on it became obvious that (1) Harry held his mother as an idealised figure whom he could not think negatively about, (2) his ego was almost merged with her ego, and (3) Harry found it difficult to separate fantasy and jokes from reality. Symington didn’t tell Harry these facts but helped him to work them out himself. After Harry left the therapy Symington learned the following facts from the social worker: (1) Harry was now unwilling to do any work task at the centre. (2) He stayed at home, but reported to his social worker from time to time and continued to wonder whether he should take up treatment again. (3) It was confirmed that Harry was able to conduct a conversation that would have been quite impossible before the therapy began. He was less pressured by inner anxiety. (4) He was more decisive and had a greater sense of autonomy in relation to life’s choices. (Becoming a Person, p. 137)

Overall then the therapeutic intervention was very successful, in that it helped Harry achieve a greater sense of self-understanding and autonomy. Obviously from a single case study it is hard to draw any large scale generalisations. It seems that Symington’s practical wisdom in realising that he should not offer Harry yet another assessment to place alongside all of the others was vital to the therapeutic success. Likewise Symington’s recognition of the unconscious idealisation of the mother and of Harry’s partial inability to distinguish fantasy from reality was vital. More importantly, however, Symington listened to Harry’s stories and helped Harry arrive at an understanding of his situation himself, and it was this that gave Harry his greater autonomy. So Symington’s success was a combination of theoretical insight and practical wisdom.

Despite the success of Symington’s analysis of Harry, there were aspects of his paper that I was less than happy with. In particular I disagreed with his theoretical explication of therapy for mentally handicapped patients. He explicated developmental delays in people with intellectual disabilities in the following way:

“In the subnormal patient there is an active force blocking the development of intelligence. This means that intelligence presents itself as a threat to the organism.” (ibid, p. 141)

“Now, if there is a regression to the foetal stage in development as Pierce Clark suggests, then to move in a new relation to mother as a result of birth is something powerfully resisted. The hatred of reality and new objects necessarily implies a violent detestation of intelligence. The patient desires to stay in a fixed relation to mother and does not want the emergence of intelligence” (ibid, p. 142)

The above characterisation of intellectual disability is, to put it bluntly, utter nonsense. Firstly, people have intellectual disabilities for a variety of different reasons. If one considers genetic disorders like Down syndrome, Fragile X syndrome, Angelman syndrome, and Williams syndrome one can see the absurdity of Symington’s characterisation of the cause of low intelligence in intellectual disability. We understand the biochemical reasons for lower intelligence in all of these disorders in incredible detail. The same is true of neurodevelopmental disorders which occur as a result of birth trauma, e.g. an umbilical cord getting wrapped around the baby’s neck. If this results in damage to motor control or general intelligence we can trace it to a lack of oxygen in particular areas of the brain. There is simply no need for Symington’s psychological hypothesis in the case of intellectual disability.

Probably the reason Symington was driven to his hypothesis was to explain the change that occurred in his patient, which would not necessarily be predicted on the assumption of fixed brain damage. However we are learning that the brain is much more flexible and plastic than was previously thought (though it is nowhere near as plastic as some hype would lead one to believe). So Symington’s success can probably be explained in these terms.

In recent times the neuropsychoanalyst Mark Solms has been treating patients who have brain damage. Solms is an expert on brain function as well as on the human element of psychotherapeutic treatment. Symington showed a real intuitive ability to help Harry with his treatment. It is imperative, though, that analysts offering such treatment follow Solms’s example and acquaint themselves with the relevant neuroscientific data to supplement their treatment.

Linguaform Philosophy of Mind: Part 3

In my last blog I considered Quine’s claim that it is necessary for a child to grasp the syntax of quantification before we are justified in attributing an ontology to him. I showed that the experimental work of Soja et al. has refuted this claim. In this blog I will consider the claim that we must make having a language the criterion for being able to think, because if we use the more liberal criterion that creatures who can reliably discriminate aspects of their environment can think, we will end up attributing thought to thermometers etc. This argument is supported by, amongst others, Brandom and Davidson. I will argue here that the argument is not compelling, as it relies on the untenable notion of intrinsic intentionality. I will focus my criticisms on Brandom and Davidson, who are the best contemporary philosophers arguing against treating reliable discrimination as evidence of thought.

The bogey that Davidson and Brandom are worried about is that by applying the criterion of reliable discriminative capacities as the mark of a thinker we are casting our net too wide. McDowell sums up Brandom and Davidson’s fear as follows:

“On this view, the differential responsiveness to danger manifested in an animal’s capacity for fleeing would not license crediting the animal with being aware of danger, any more than, say, iron’s differential responsiveness to moisture in its surroundings-a disposition to rust if there is moisture and not to rust if there is not-licenses supposing bits of iron can be aware of the presence of moisture” (McDowell: Conceptual Capacities in Perception, p. 133)

This fear of not being able to distinguish between legitimate intentional ascriptions and illegitimate ones is what motivates Davidson and Brandom, and the fear is misplaced. From a scientific perspective it is legitimate to use an intentional locution if it gives us predictive control of phenomena that is not otherwise available. In the case of rust we have chemical explanations which can predict and control the phenomena in ways which go far beyond intentional explanations. However there is nothing wrong with using intentional explanations as a stop-gap. In the case of humans we use intentional explanations in a lot of cases because of the practical difficulties of explaining human behaviour in terms of the design stance or the physical stance. Now it is an empirical question whether future science will eventually eliminate the need for intentional-level explanations of humans. However at this stage of our scientific development it is clear that adopting the intentional stance towards humans is still a very fruitful approach. This is the pragmatic approach I adopt, and one can see that it is heavily influenced by Dennett. I think that this approach solves a lot of the problems which face Davidson and Brandom. However before trying to justify this pragmatic approach I will first outline why Brandom and Davidson do not adopt it.

In his excellent 1994 book ‘Making It Explicit’ Bob Brandom outlined his theory of how thought emerges from implicit social practices which have normative force for the society that engages in them. Brandom notes that Dennett’s intentional stance approach presupposes that intentionality is conferred on creatures by more capable intentional systems. Such intentional ascription involves concept-using creatures operating in the “space of reasons”, constructing theories to explain the behaviour of entities in their world. Brandom argues that the implicit social practices that govern our linguistic behaviour are where intrinsic intentionality lies. On his view Dennett’s adoption of the intentional stance therefore relies on a notion of intrinsic intentionality.

Davidson also adopts a more realist approach to intentional ascription than Dennett does. In his 1990 ‘Real Patterns’ Dennett diagnoses the main difference between himself and Davidson on this topic as centring on their different interpretations of Quine’s indeterminacy of translation argument. When speaking of competing intentional interpretations Dennett makes the following point:

“When one wins and the other loses, it will look to the myopic observer as if one “theory” has scored a serious point against the other, but when one recognizes the possibility that both may chalk up victories, which permits either one to improve his theory by making adjustments, one sees that local triumphs may be insufficient to provide any ground in reality for declaring one account a closer approximation to the truth.

Now, some might say that this situation is always unstable; eventually one interpretation is bound to ramify better to new cases, or be deducible from some larger scheme covering other data, etc. That might be true in many cases; but- and this I think is the central point of Quine’s indeterminacy thesis-it need not be true of all” (Real Patterns: p. 304)

Dennett takes the indeterminacy of translation argument as showing that we can have two rival interpretation schemes which are both reliable predictors over the long run but which disagree in crucial respects. He thinks that his interpretation of the indeterminacy of translation has consequences for the degree of realism that is appropriate for one to adopt towards propositional attitude explanations:

 “How does this make me less of a realist than Davidson? I see that there could be two different systems of belief attribution to an individual which differed substantially in what they attributed- even in yielding substantially different predictions of the individual’s future behaviour-and yet where no deeper fact of the matter could establish that one was a description of the individual’s real beliefs and the other not. In other words, there could be two different, but equally real, patterns discernible in the noisy world…the choice of a pattern would indeed be up to the observer, a matter to be decided on idiosyncratic pragmatic grounds” (ibid, p. 305)

Davidson’s view of the indeterminacy of translation is less radical than Dennett’s. When discussing the indeterminacy of translation Davidson explains it by invoking the analogy of two different ways of measuring temperature:

“When we use numbers to keep track of relations among weights and lengths and temperatures, we are not apt to respond that different sets of numbers do as well as others in keeping track of all that is relevant empirically by complaining that weights or lengths or temperatures are not ‘real’. We know that there is no contradiction between saying that the temperature of the air is 32 degrees Fahrenheit and saying that it is 0 degrees Celsius: there is nothing in this ‘relativism’ to show that the properties being measured are not real.” (Davidson: What Is Present to the Mind)

This is a much less radical view than Dennett’s. The different measurement schemes can be translated into each other quite easily. What Dennett is talking about is a situation where two interpretations of a person’s behaviour are radically different and lead to different predictions of the agent’s behaviour. He is talking about a situation where there is no fact of the matter as to which interpretation is the correct one, because a failed prediction does not falsify an entire interpretation scheme.

Since intentional stance ascriptions can give us divergent interpretations which lead to different predictions, and there is no fact of the matter as to which is the correct one, Dennett argues that there is little sense in being a realist about such ascriptions. He thinks that in one sense we can argue that intentional ascriptions pick out real patterns; however there will be divergent real patterns in the environment which can be picked out by the agents doing the interpreting, and there will be no fact of the matter as to which is the correct interpretation.

Davidson does not think that Dennett has given us good reason for adopting his instrumentalist approach to intentional ascription. As he sees it, Dennett has confused two issues: (1) whether propositional attitudes are entities, and (2) whether there is a correct answer to the question of whether someone has a certain propositional attitude (Indeterminism and Anti-Realism, p. 82). To question (1) Davidson answers no. He notes that we don’t need to postulate beliefs as entities in order for them to be true of a person, any more than we need to postulate an entity ‘Weight’ that a person has in order for it to be true that the person weighs a certain amount. On this point Dennett and Davidson are largely in agreement with each other.

On point (2) there are substantial disagreements between the two thinkers. Davidson does not agree with Dennett’s claim that there is no correct answer to the question of whether a certain propositional attitude is true of a person. Davidson makes the point that it is only when one is within language that truth and error come fully onto the scene. On Davidson’s conception language necessarily involves a self, (at least) another person, and a shared object of experience which they are making claims about. For Davidson, the concept of objectivity is constituted by these social reactions to shared objects in the external environment.

If, for example, I am watching a person looking at the movements of an animal and he reacts in a similar way to the way I would, and likewise he is watching my reactions to the animal, we come to judge each other as creatures with similar orientations to the world. We can make judgements about what we think he SHOULD do in certain circumstances, for example hide when he sees a lion. Our concepts of right and wrong co-evolve as we interpret each other as acting, or as making claims about the world, in ways we judge to be inconsistent with what the other believes about the world. So if we believe that the other thinks lions are dangerous, and the other doesn’t hide when a lion is present, we might judge that the person WRONGLY thinks that the lion cannot see him. So, for Davidson, since our concepts of right and wrong, which rely on the notion of objective truth and falsity, are so intimately connected with our propositional attitude ascriptions, he argues that we must conclude that if anything is real our propositional attitude ascriptions are. This is because the very notion of objectivity is parasitic on intentional ascription.

Both Davidson and Brandom argue in similar ways that Dennett’s intentional stance approach does not work because we have objective grounds for attributing intentionality to members of our linguistic community. So if we do adopt Dennett’s intentional stance approach to non-linguistic creatures as a sort of pragmatic measure, then we need to recognise that these creatures, unlike us, do not have real intentionality.

Overall I am not very impressed with Davidson’s or Brandom’s arguments for intrinsic intentionality. Both of them argue that intentional ascription is necessarily a social, intersubjective phenomenon, and that intentional ascription to non-linguistic creatures is reliant on language-using theorisers.

I agree that as we develop our sense of right and wrong, and of true and false, we probably need to do so in a social environment with other creatures whom we take to be believers. However I don’t think that this fact is any reason to be a realist about belief/desire psychology in the sense Davidson is. Davidson takes the fact that intersubjective linguistic judgements are necessary for developing our sense of right and wrong to argue that if anything is objectively real it is our propositional attitudes. I see no reason to draw this conclusion. The fact that folk psychology may have played a key role in developing our sense of right and wrong does not vindicate folk psychology; it does not mean it is real in Davidson’s sense. If, as we learn more and more about neuroscience, we develop more accurate ways of predictively understanding and controlling the behaviour of people, then folk psychology will have been supplanted as a theory. Until such time (if it ever comes) we are justified in using the intentional stance as a stop-gap means of explaining and predicting the behaviour of linguistic and non-linguistic agents in the world. Nothing Davidson or Brandom have said really casts any doubt on the pragmatic approach I am recommending here. Given this, I think that their assertion that non-linguistic creatures do not have propositional attitudes is unmotivated. Better to simply adopt the pragmatic approach of conceiving of non-linguistic as well as linguistic creatures as intentional agents, an approach that is justified on explanatory and predictive grounds.

In my next blog I will consider Davidson’s argument that the logic of belief/desire ascriptions, their referential opacity, does not apply to non-linguistic creatures and so shows that they do not think. I will demonstrate that this argument does not work. In my final blog on this topic I will critique Dennett’s linguaform conception of consciousness and show why it does not work.

Chomsky, Russell and Platonic Propositions

CHOMSKY, RUSSELL AND PLATONIC PROPOSITIONS

Chomsky’s claim that humans are born with an innate mental lexicon which as a matter of empirical fact yields analytic connections seems to commit him to the existence of propositions. In this blog I will evaluate the claim that Chomsky is committed to the existence of propositions. I will argue that though his linguistic theory does not commit him to the existence of Platonic propositions, he does nonetheless endorse the existence of a certain type of proposition which Quine would find objectionable.

I will begin by outlining the notion of a Platonic proposition which was endorsed at the turn of the century by Bertrand Russell. Russell first conceived of propositions as the non-linguistic entities that sentences are about. So, for example, if a person says in English that ‘The Twin Towers were bombed on September 11th 2001’, and another person says the same thing in French, they are both asserting the same true proposition. Further examples make the point clear: if on September 10th 2001 Osama Bin Laden had said ‘The twin towers will be bombed on September 11th 2001’ and I say today ‘The twin towers were bombed on September 11th 2001’, we are taking different attitudes towards the same proposition. Propositions, like sentences, have constituents which are related to each other in certain ways. According to Russell circa 1903 the constituents of propositions do not designate the constituents of states of affairs; rather they encompass them. The proposition ‘Socrates is in the field’ contains Socrates, the relation ‘is in’, and the field. It is important to note that the objects that enter into a proposition are neither true nor false; it is only the proposition which is true or false. According to the early Russell, in order for a person to grasp a proposition we need to be acquainted with its constituents.

The two conditions which Russell put on something being a proposition are that (1) the structure of the sentence must mirror the structure of the proposition and (2) we must be acquainted with each of the constituents of the proposition. There are obvious difficulties with condition (2): consider propositions such as ‘all men are mortal’ or ‘all prime numbers have a successor’. We can grasp the truth of these propositions; however we are not acquainted with ‘all men’ or ‘all prime numbers’. To meet this difficulty Russell postulated entities called Denoting Concepts: we could be acquainted with these concepts, and they could denote entities which fell beyond our ken. The price which Russell had to pay for the postulation of Denoting Concepts was that it complicated his theory of propositions, which now consisted of entities we are acquainted with (objects) and Denoting Concepts, which we are also acquainted with and which refer to the entities we are not directly acquainted with, such as ‘all men’, ‘infinite prime number’, etc. As is well known, in his famous Gray’s Elegy argument Russell showed that his postulation of Denoting Concepts led to more trouble than it was worth. Having cleaned this mess up, Russell was able to hold that we are not acquainted with Denoting Concepts; rather a proposition is a complex of objects arranged in a certain relation. He showed that the grammatical form of a sentence misleads us and hides its logical form. Consider the sentence ‘Socrates is mortal’: from a grammatical point of view this sentence would seem to have two constituents, the subject ‘Socrates’ and the predicate ‘is mortal’; however on Russell’s analysis it involves an existential quantifier and an argument which saturates the unsaturated predicate. This method of analysis has the advantage that it helps us assign truth or falsehood to sentences containing names that don’t refer, by analysing those names as definite descriptions. So for Russell the logical form of a sentence represents the unambiguous proposition which the sentence is ambiguously trying to assert. This picture gives philosophy the job of discovering which sentences express true propositions and which do not, prompting the early Wittgenstein to state that ‘all philosophy is a critique of language’.
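
To make the contrast between grammatical and logical form concrete, here is the standard textbook rendering of the Russellian analysis (my own gloss, not a quotation from Russell): an ordinary proper name is read as a disguised definite description, so that ‘Socrates is mortal’ comes out roughly as

```latex
\exists x \,\bigl( Sx \;\wedge\; \forall y \,( Sy \rightarrow y = x ) \;\wedge\; Mx \bigr)
```

where Sx stands for whatever uniquely identifying description is associated with ‘Socrates’ and Mx for ‘x is mortal’. The surface subject-predicate form has disappeared in favour of quantifiers, variables and predicates, which is the sense in which grammatical form hides logical form.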

Now Quine famously claimed that one could benefit from Russell’s logic and explain truth and falsehood (along Tarski’s lines) without admitting things such as propositions into our ontology. His indeterminacy of translation argument was explicitly designed to show that propositions are non-explanatory and have poor identity conditions, and hence should not be admitted into our ontology.

When analysing whether Chomsky is committed to the existence of propositions it is important to note that Chomsky’s primary concern is not with the epistemological issues which interested Russell. Furthermore he is not interested in mapping word-world relations: on his view language can only fruitfully be understood if a certain amount of idealisation occurs and the domain of research is limited. For example, if a person is interested in studying the structure of the foot, they will not exhaustively try to map every interaction between the foot and the world; rather they will treat the foot as a biological aspect of the person and proceed to investigate its internal structure. Now for Chomsky language is just a biological aspect of the human species like any other, and hence should be studied in the same manner as every other aspect of the human species. In order to study language we need to determine what its main features are, and then construct an abstract model to explain how those features are possible. The central features of language are, first, the fact that it is compositional: the meaning of a sentence is determined by the meanings of its constituents and the rules used to combine them. Second, language involves the ability to express an infinite number of utterances using finite means. Third, humans born into any normal human environment will begin to speak at certain predictable times, and require very little explicit instruction to do so; the converse situation exists with other primates, who even with explicit teaching can learn only extremely rudimentary linguistic abilities. A further fact about language which needs to be explained is that humans brought up in different environments speak different languages. These are the basic facts of language that Chomsky sets out to explain in his theory of generative grammar, and Chomsky nowhere claims that Platonic propositions are helpful in addressing these questions; in fact he has in numerous places criticised people such as Katz and Postal for holding that language is an abstract object.

Before considering Chomsky’s position I will first consider a possible way that Platonic propositions could be used to explain the above four facts about language. The compositionality requirement could be explained by the claim that the grammatical form of sentences mirrors the logical form of propositions. So consider the sentence ‘Socrates is bald’: this sentence could mirror a proposition which contained the particular Socrates and two universals, the relation ‘is’ and the attribute ‘bald’; or the proposition could consist of the particular Socrates and the universal ‘is bald’. The two seemingly contradictory facts that humans learn to speak without explicit instruction, and yet speak different languages when born into different environments, can also be handled by postulating propositions. When an English speaker says ‘Socrates is bald’ and a Frenchman says its equivalent, they are both expressing the same proposition but using different sounds to do so. And the Platonic doctrine that we are born with innate knowledge could be used to explain why we do not need explicit instruction to speak natural languages: humans, it could be claimed, are born with rational intuition and hence learn to discover propositions in the world and represent them using arbitrary symbols. All that now needs to be explained is our ability to use finite means to express infinitely many ideas, and this ability is the real sticking point. When Russell wanted to explain our ability to grasp propositions such as ‘all prime numbers have a successor’ he was led to postulate Denoting Concepts which we are acquainted with and which denote the numbers in question; so instead of being acquainted with an infinite number of propositions about prime numbers we are acquainted with the Denoting Concepts. An obvious difficulty with this explanation is that if one accepts it, one can no longer accept the view that grammatical sentences mirror the logical structure of propositions. One is left with a logical form which is hidden by the grammatical form of the sentence, and this being the case one no longer has an explanation of the compositionality of sentences.

We need to explain why sentences are structured the way they are. If they don’t mirror the logical form of propositions, one could claim that we adopt the structures we do because they are arbitrary conventions, and that the conventions as a matter of contingent fact do not mirror the logical structure of propositions. However, if one accepts this conventionalist view one is committed to the view that language is learned, and one must then be able to sketch a plausible learning-theoretic account of language acquisition. The above discussion indicates that Platonic propositions as explanatory tools cannot explain all of the important facts about language, though they do seem to offer some kind of explanation of at least some of them. Furthermore, as the later Wittgenstein recognized, not all linguistic usage involves expressing propositions; he used the metaphor of language as a tool rather than a picture of reality, and for the most part he was correct. The fact is that linguistic usage is inherently novel, and is not constrained merely to the picturing of facts. So much more than Platonic propositions is needed to explain linguistic ability.

Chomsky’s explanation of the four core features of language involves a shift towards internalistic explanation rather than the externalistic explanations which are common in philosophy. Firstly, he agreed with Plato that to explain our linguistic ability one needs to postulate that the person comes to the learning situation with innate apparatus. So if people are all born with the same innate ability to acquire language, it follows that a person born in France and a person born in England could both be saying the same thing while sounding different, not because they both grasp the same proposition which they express using different sounds, but because they are speaking sentences which are structured differently owing to parametric variations fixed by the type of data one is exposed to in one’s childhood environment. The fact that we can express infinitely many ideas using finite means can be explained by making explicit the rules which underlie this recursion. Likewise the fact of compositionality can be explained in terms of innate features of the lexicon.

Chomsky argues for a language faculty with his poverty of stimulus argument; I have criticised this argument in an earlier blog and will not repeat the material here. I will now sketch the nature of the language faculty Chomsky postulates, and show that it in no way relies on the notion of a Platonic proposition. Chomsky argues that this innate apparatus is genetically programmed and wired into our brains, and that it determines how we speak and understand language. Two of the most obvious features of language which all linguists recognise are, first, that we can understand and produce an unbounded number of sentences despite the fact that we have only finite resources available in our brains, and second, that sentences have a fixed structure: words can only go together in a certain order and still be grammatical. Chomsky tried to capture these features in his pioneering work Syntactic Structures, where he made these facts explicit. His aim was to enumerate certain principles, predict from them how sentences should go together, and then test sentences to see if these predictions are accurate. According to Chomsky people’s grammatical intuitions are the raw data we have to work with, and the principles and parameters approach is the explanatory theory we construct to explain the data. Chomsky’s first insight into language is that it is not just a string of words which we learn inductively; rather we automatically group words into what he calls phrases. He argues for this position by constructing mathematical models of how language could be learned inductively as a string of words, and then proving that such models cannot capture the structure of natural language[1]. He called such a model a Finite State Grammar. A Finite State Grammar consists of the following five features: (1) an initial state; (2) a finite number of states; (3) a specification of transitions from one state to another; (4) a specification of a symbol that will be printed when a particular transition obtains; (5) a final state. (Lasnik: SSR, p. 12). In Syntactic Structures Chomsky constructed a finite state grammar consisting of five states. In the initial state the only option for the machine is to proceed to state (2), and when this is done the machine must print the symbol ‘The’. From state (2) the machine has two options: go to state (3) or state (5). If it goes to state (3), ‘man’ is printed; if it goes to state (5), ‘men’ is printed. Once the machine has taken the option of either (3) or (5) its next move is determined: (3) must go to state (4), at which point ‘comes’ is printed, and (5) must go to state (4), at which point ‘come’ is printed. So there are two possible sentences that this finite state grammar will allow: (A) The man comes. (B) The men come. The question is: can such a machine be modified to make it capable of capturing infinity?

Now it is easy to extend such a machine so that it can capture infinity. At state (2) a third option can be added, so the machine now has three options: it can go from (2) to (3), from (2) to (5), or it can loop back on itself from (2) to (2). The rule can then be added that every time (2) loops back on itself the symbol ‘old’ is printed. The finite state grammar now allows for infinitely many constructions: it can print (A) The man comes, (C) The old man comes, or (D) The old old man comes, and so on to infinity. Now while such a device is obviously useful, and can capture an important feature of language (the ability to construct infinitely many sentences by finite means), it has obvious weaknesses. It cannot capture certain basic grammatical features of ordinary language, such as embedding and cross-serial dependencies.
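
For readers who like to see the machinery written out, here is a minimal sketch in Python of the five-state machine just described, with the ‘old’ loop added at state (2). The state numbering follows the text; the transition table and the cap on loops are my own choices for exposition, not anything in Chomsky or Lasnik:

```python
import random

# A toy finite state grammar for the fragment described above.
# States are numbered (1)-(5) as in the text; taking the loop on
# state (2) prints "old" each time.
TRANSITIONS = {
    1: [("the", 2)],
    2: [("old", 2), ("man", 3), ("men", 5)],
    3: [("comes", 4)],
    5: [("come", 4)],
}
FINAL_STATE = 4

def generate(max_loops=3):
    """Walk the machine from state (1) to the final state, printing symbols."""
    state, words, loops = 1, [], 0
    while state != FINAL_STATE:
        word, nxt = random.choice(TRANSITIONS[state])
        if nxt == state:
            if loops >= max_loops:       # cap "old old old ..." for readability
                continue
            loops += 1
        words.append(word)
        state = nxt
    return " ".join(words)

if __name__ == "__main__":
    for _ in range(5):
        print(generate())   # e.g. "the man comes", "the old old men come"
```

The point of the exercise is that any device of this shape can only chain local transitions; it has no memory with which to pair up items that depend on each other at a distance.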

Consider[2] the phrase ‘an anti-missile missile’, which is a missile used to defend against missile attacks. Now presumably if I knew that my enemies had an anti-missile missile, I would proceed to try to develop an anti-anti-missile-missile missile. As Lasnik puts it, ‘There are presumably technological limits on how far one can go in creating this kind of weapon, but there are no linguistic limits as far as the corresponding word formation process is concerned.’ Let’s notate the situation as follows: (45) antiⁿ missileⁿ⁺¹ (Lasnik, p. 15). The important point to note is that a finite state grammar cannot capture this basic grammatical feature of language. If one were to construct a grammar with a loop at ‘anti’ and a loop at ‘missile’, one would be able to create as many ‘anti’s or as many ‘missiles’ as one wanted; however one would not be able to correlate the two in the way required by (45) without adding abstract phrase structure. In order to explain the features of language we must assume that words are grouped together in terms of abstract phrase structure rules which determine the ways they can be combined.

As I have already said, a finite state grammar has no way of keeping track of how many ‘anti’s and how many ‘missiles’ there are in the sentence. Context-free phrase structure grammars can perform this feat, and they do so by introducing the two elements at the same time. Consider the following grammar: (a) Σ: S and (b) F: S→aSb. The initial designated symbol S is what is known as an abstract non-terminal symbol which will not be part of the final sentence, so it will have to be rewritten as a sequence of terminal symbols. As can be seen from (b), we are to rewrite S as aSb. Now from (a) and (b) we construct the following derivation. We will call our derivation Grammar X, and it has the following structure:

Grammar X:

Step 1: S (following a)

Step 2: aSb (following b)

Step 3: aaSbb (by reapplying b to step 2)

Step 4: aaaSbbb (by reapplying b to step 3)

Obviously we can carry on this sequence to infinity, and using this process we can keep track of the dependencies between the a’s and the b’s. The important point to note is that it is the abstract structure that makes it possible for the number of a’s and the number of b’s to be correlated. The phrase structure grammar above is obviously very different from natural language, so I will now use the same technique to construct a model which is a bit closer to natural language.
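
A correspondingly minimal sketch (mine, not Lasnik’s) of the rewrite process for the grammar Σ: S, F: S→aSb reproduces the derivation table above. Note that the grammar as given keeps the non-terminal S at every step; to obtain a finished terminal string one would also need a terminating rule such as S→ab, which I add here purely for illustration:

```python
def derive(n):
    """Apply the rule S -> aSb n times, recording each step of the derivation."""
    string = "S"
    steps = [string]
    for _ in range(n):
        string = string.replace("S", "aSb", 1)   # rule (b): rewrite S as aSb
        steps.append(string)
    return steps

if __name__ == "__main__":
    steps = derive(3)
    for i, step in enumerate(steps, start=1):
        print(f"Step {i}: {step}")               # S, aSb, aaSbb, aaaSbbb
    # A terminating rule (my addition): S -> ab yields the terminal string a^4 b^4.
    print(steps[-1].replace("S", "ab", 1))       # aaaabbbb
```

Each application of the rule introduces an a and its matching b at the same time, which is exactly what the finite state machine above could not do.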

So let us begin the way we did above by stating what our initial symbol is and then outlining our rewrite rules: (1) Σ: S (2) S→NP VP

NP→N

VP→V

N→Arsene

N→Alex

V→Thinks

V→Laughs

V→Cries

Using the above rules one can construct sentences by means of what is known as a derivation. When one performs a derivation one basically tries to get rid of all of the non-terminal symbols. So our derivation goes as follows:

           Step 1: S

                  Step 2: NP VP (S→NP VP)

                  Step 3: N VP    (NP→N)

                  Step 4: Arsene VP (N→Arsene)

                  Step 5: Arsene V    (VP→V)

                  Step 6: Arsene laughs (V→Laughs)

STOP

As Cedric Boeckx (Bare Syntax, 2008) correctly notes, the above grammar cannot capture infinity; however if we want to capture infinity all we need to do is introduce a rule that reintroduces S, for example VP→V S, which lets a verb take a whole sentence as its complement. With this proviso in place we can capture sentences such as ‘Arsene thinks Alex laughs’ or ‘Alex cries Arsene laughs’. So our artificial model has moved beyond the limits of a finite state grammar and can capture things such as embedding. However there are some aspects of grammar which phrase structure grammars cannot capture, such as unbounded cross-serial dependencies. In order to show what these unbounded cross-serial dependencies are and how Chomsky accounts for them I will again give concrete examples from ordinary language.
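
Here is a hedged sketch of the toy grammar above with the recursion-introducing rule included. I have written the rule as VP→V S (rather than the bare VP→S) so that it actually generates strings like ‘Arsene thinks Alex laughs’; the depth cap is just there to keep the random examples finite:

```python
import random

# Rewrite rules from the text, plus a rule that reintroduces S.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["N"]],
    "VP": [["V"], ["V", "S"]],          # VP -> V S gives embedding, hence infinity
    "N":  [["Arsene"], ["Alex"]],
    "V":  [["thinks"], ["laughs"], ["cries"]],
}

def expand(symbol="S", depth=0, max_depth=4):
    """Recursively rewrite a non-terminal until only terminal words remain."""
    if symbol not in RULES:             # a terminal word
        return [symbol]
    options = RULES[symbol]
    if depth >= max_depth:              # stop choosing the recursive option
        options = [o for o in options if "S" not in o] or options
    words = []
    for sym in random.choice(options):
        words.extend(expand(sym, depth + 1, max_depth))
    return words

if __name__ == "__main__":
    for _ in range(5):
        print(" ".join(expand()))       # e.g. "Arsene thinks Alex laughs"
```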

In English there are three different types of auxiliaries: (1) modal auxiliaries (can, must, may, will, etc.); (2) HAVE (has, had, etc.); (3) BE (am, is, was, were). Now by examining how these auxiliaries can be combined with other words and with each other while the sentence remains grammatical according to the intuitions of ordinary English speakers, Chomsky uncovered a series of generalisations. What Chomsky did was to discover a series of facts about the behaviour of English auxiliaries and then formulate general laws about these linguistic regularities. All English sentences have a main verb and a noun, as in sentence (1) Arsene laughed. Some English sentences, in addition to having main verbs, have what are known as auxiliary verbs. Consider sentence (2) Arsene may laugh. Sentence (2), as well as a main verb, contains the modal auxiliary ‘may’. A sentence can also contain more than one auxiliary: (3) Arsene may have laughed. So sentence (3) contains two auxiliary verbs, a main verb and a noun. A sentence can even contain three auxiliaries: (4) Arsene may have been laughing. Now what Chomsky discovered was that there are regularities which govern how these auxiliaries behave.

When the main verb is combined with certain auxiliaries its form remains unchanged; however when it is combined with certain other types of auxiliaries its form is altered. Consider the following constructions (taken from Lasnik, p. 36):

(X) (a) Arsene may laugh.

(b) Arsene will laugh

(c) Arsene can laugh

(d) Arsene could laugh

(Y)  (a) Arsene has laughed

(b) Arsene is laughing

(Z)   (a) Arsene had laughed

(b) Arsene was singing

Obviously the modal auxiliaries (henceforth M) do not modify the form of the main verb, whereas the other auxiliaries do. So from the behaviour of auxiliaries Chomsky formulated some generalisations, such as the following. Generalisation 1: when a sentence contains a modal auxiliary (M), it is always the first thing after the subject. Generalisation 2: when have and be both occur, be immediately follows have.

Now we can take a further look at examples like the above ones and see if we can find further generalisations. Consider, for example, sentences with no auxiliary verbs.

(Q) (a) John owns a house (present)

(b) John owned a house (past)

(c) *John own a house (bare)

(d)* John owning a house (progressive)

(e)* John owned a house (perfect)

From the above data we can derive Generalisation 3: if the main verb is the first verb-like thing in the sentence, then it can appear in the ‘present’ or ‘past’ form but not in the ‘bare’, ‘progressive’ or ‘perfect’ form (Lasnik, p. 38). Chomsky discovered that Generalisation 3 also works for all ‘verb-like’ things, such as modal auxiliaries, have and be. So from this fact he abstracted to Generalisation 4: whatever ‘verb-like’ thing is first in the sentence, it will appear in the present or past form. From here Chomsky examined whether he could find generalisations for the second ‘verb-like’ thing in a sentence, and for the third. It is the generalisation he discovered involving BE which concerns us here. Generalisation 5: whenever BE occurs, the next ‘verb-like’ thing appears in the ‘progressive’ form. So when a sentence has BE in it, the following verb carries -ing. As Lasnik put it, be and ing go together but they don’t go together: they always co-occur, yet they never end up on the same word.

The above point can be illustrated with a concrete example. Take the following sentence: (1) Arsene has been laughing. As a direct result of the BE auxiliary being used, the -ing gets affixed to the verb laugh, and this is a concrete example of an unbounded cross-serial dependency. Now given that phrase structure grammars cannot capture these unbounded cross-serial dependencies in a non-ad hoc manner, we need a different approach to capture this fact about language. The device which Chomsky appeals to in order to capture this feature of language is what he calls a Transformation. It is at this point that Chomsky introduces his division between deep structure and surface structure. At the level of deep structure a sentence like (1) has the following form: Arsene (has en) (be ing) laugh. It is through applying transformations that the sentence gets converted into the surface structure form Arsene has been laughing. It should be patently obvious that this deep/surface structure distinction has nothing in common with the distinction between logical form and grammatical form.
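
The following is a deliberately crude sketch of the idea behind the transformation (often called ‘Affix Hopping’); it is my own simplification for illustration, not Chomsky’s actual rule system, and the little morphology table is an assumption standing in for English inflection. Each affix in the underlying string attaches to the verbal element on its right, which is why be and ing ‘go together but don’t go together’:

```python
# Toy "affix hopping": each affix (en, ing) hops onto the verbal element
# immediately to its right in the underlying string.
AFFIXES = {"en", "ing"}

# Minimal hand-written morphology table (an assumption for illustration).
FORMS = {
    ("be", "en"): "been",
    ("laugh", "ing"): "laughing",
    ("sing", "ing"): "singing",
}

def affix_hop(deep):
    """Turn an underlying string like ['Arsene','has','en','be','ing','laugh']
    into a surface string by attaching each affix to the word after it."""
    surface, i = [], 0
    while i < len(deep):
        word = deep[i]
        if word in AFFIXES and i + 1 < len(deep):
            host = deep[i + 1]
            surface.append(FORMS.get((host, word), host + word))
            i += 2                       # affix and host are spelled out together
        else:
            surface.append(word)
            i += 1
    return " ".join(surface)

if __name__ == "__main__":
    # Deep structure from the text: Arsene (has en) (be ing) laugh
    print(affix_hop(["Arsene", "has", "en", "be", "ing", "laugh"]))
    # -> Arsene has been laughing
```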

The above grammatical rules and generalisations which Chomsky discovered led him to the conclusion that as we learn new words we automatically group them into phrases such as Noun Phrases, Verb Phrases, Prepositional Phrases, and Adjective Phrases. A Noun Phrase is any phrase that is about the noun, such as the phrase ‘the logical philosopher’. A Noun Phrase can be given the following structure: NP→(DET) A* N, which means that a Noun Phrase consists of an optional determiner, any number of adjectives, and a noun. Further definitions are VP→V NP and S→NP VP. So a sentence can be broken down into various phrases, which can be further broken down into words operating according to certain rules. A strange discovery was made by Chomsky when analysing these rules: the various phrases were all found to have the same structure.

Steven Pinker summarised the rules that all phrases share in his book The Language Instinct, where he claimed that there are four such rules. The rules are as follows: (1) all phrases have a head; (2) all phrases have some role-players which are grouped with the head inside a sub-phrase; (3) all phrases have modifiers which appear outside the sub-phrase; (4) all phrases have a subject (The Language Instinct, p. 110). I will explain these four rules in more detail before further discussing their significance.

The first rule, that all phrases have a head, is in a sense obvious; it simply means that the phrase is about the head. So the NP ‘the logical philosopher’ is not about the determiner ‘the’, nor is it about the adjective ‘logical’; rather it is about the noun ‘philosopher’. The noun is the head of the phrase and the adjective is its complement. To understand the second rule consider the following sentence: ‘Dave drank a pint of vodka’. The sentence consists of a subject (Dave) who plays the role of agent, and an object (the vodka) which is a patient. The vodka plays the role of patient because something is being done to it, while the subject is the agent because it is doing something to the object. More complicated sentences can be broken down into the thematic roles of agent, patient and recipient, e.g. ‘Van Persie passed the ball to Ramsey’: Van Persie (agent), the ball (patient) and Ramsey (recipient). The sentence can be thought of in similar terms to how Frege thought of it: as being broken down into function and argument. So the preceding sentence can be thought of as consisting of the three-place predicate ‘x passed y to z’; this incomplete predicate is a function which becomes complete when the arguments (appropriate names) are mapped onto the variables. The variables in this case are filled by the three nouns which are the various role-players: the agent, the patient, and the recipient. The roles the various names play are determined by the lexical information stored in the brain about the predicate; for example ‘passed’ requires an agent, a patient and a recipient, whereas ‘drank’ just requires an agent and a patient. These role-players are grouped with the head inside a sub-phrase called an N-bar or V-bar.

The third rule is that all phrases have modifiers which exist outside of the sub-phrase. Consider the PP ‘from France’: this prepositional phrase is what is known as a modifier. The phrase ‘The captain of Arsenal from France’ can be schematised as follows: (NP (Det the) (N’ (N captain) (PP of Arsenal)) (PP from France)). The PP ‘of Arsenal’ is about the noun ‘captain’: to be a captain you have to be a captain of something, so they are intrinsically connected. However, the modifier ‘from France’ is not intrinsically connected to ‘the captain’, so it is not grouped in the same sub-phrase. So while the modifier is still part of the NP, it is grouped on another branch. The fourth rule is that subjects are given a special role in phrases; the subject is usually a causal agent, and is represented as Spec.

As Pinker noted, what is interesting about these rules is that all phrases share them, whether they are an NP, VP, AP or PP, so the rules which all of these phrases are governed by must form an abstract set of principles. The principles are represented by an abstract schema called X-bar theory: XP→(SPEC) X’ YP, which states that ‘a phrase consists of an optional subject, followed by an X-bar, followed by any number of modifiers’. It is these abstract principles which govern how words are grouped in sentences.
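
To make the X-bar schema a little more tangible, here is a sketch of it as a simple data structure, built around the ‘captain of Arsenal from France’ example above. The class and field names are my own illustrative choices; nothing here is a claim about how the schema is actually realised in the mind or in any particular linguistic formalism:

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Phrase:
    """An X-bar style phrase: XP -> (Spec) X' (modifiers), X' -> X (complements)."""
    category: str                    # N, V, A, P ...
    head: str                        # the word the phrase is "about"
    spec: Optional[str] = None       # optional specifier, e.g. a determiner or subject
    complements: List["Phrase"] = field(default_factory=list)   # inside the X-bar
    modifiers: List["Phrase"] = field(default_factory=list)     # outside the X-bar

    def words(self) -> List[str]:
        out = [self.spec] if self.spec else []
        out.append(self.head)
        for c in self.complements:
            out.extend(c.words())
        for m in self.modifiers:
            out.extend(m.words())
        return out

# "The captain of Arsenal from France": 'of Arsenal' is a complement of the
# head noun, while 'from France' is a modifier attached outside the N-bar.
np = Phrase(
    category="N", head="captain", spec="the",
    complements=[Phrase("P", "of", complements=[Phrase("N", "Arsenal")])],
    modifiers=[Phrase("P", "from", complements=[Phrase("N", "France")])],
)

print(" ".join(np.words()))   # the captain of Arsenal from France
```

Whatever the category of the head, the complement lives inside the X-bar with it while the modifier attaches outside, which is the sense in which all phrases share one abstract shape.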

It may seem that the positing of underlying principles which govern our language use is contradicted by the fact that people brought up in different countries speak different languages. Chomsky accounts for this fact with his discussion of parameters. Adriana Belletti and Luigi Rizzi describe the nature of a parameter in the following manner: ‘The child interprets the incoming linguistic data through the analytic devices provided by Universal Grammar, and fixes the parameters of the system on the basis of the analysed data, his linguistic experience. Acquiring a language thus means selecting, among the options generated by the mind, those which match experience, and discarding other options’ (Language and Mind, p. 17). A simplified example of a parameter is what is known as the movement parameter. Consider question formation: human languages have two different options to choose from. The first option is to move the interrogative phrase (who, etc.) to the front, to a position in the left periphery of the clause; the second is to leave the interrogative phrase in the clause-internal argument position in which it is interpreted (ibid, p. 17). English takes the first route (Who did you meet?), Chinese the second (You love who?), while French allows both options. There are no known languages which violate this parameter, so one can reasonably assume that it is an aspect of universal grammar. This, in short, is a non-technical explanation of how Chomsky believes a child goes from meagre input to torrential output.

The important point to note is that nowhere in the brief sketch above is the notion of a Platonic proposition needed. So Chomsky can be cleared of the charge of postulating Platonic propositions, which Quine would find objectionable. However Quine is not just concerned with propositions as abstract entities, as the following quote shows:

“My objection to recognising propositions does not arise primarily from philosophical parsimony- from a desire to dream of no more things in heaven and earth than need be. Nor does it arise, more specifically, from particularism- from a disapproval of intangible or abstract entities. My objection is more urgent. If there were propositions, they would induce a relation of synonymy or equivalence between sentences themselves: those sentences would be equivalent which expressed the same proposition. Now my objection would be that the appropriate equivalence relation makes no objective sense at the level of sentences” (Quine: The Philosophy of Logic, p. 2).

Chomsky does accept the existence of propositions in the sense of determinate synonymy relations, in a way which contradicts Quine. I have shown in two earlier blogs, ‘Indeterminacy of Translation and Innate Concepts’ and ‘Chomsky and Quine on Analyticity’, that Chomsky’s views on these topics are not justified empirically. Here my main point was to show that Chomsky is not committed to the existence of Platonic propositions.

 


[1] Modern mathematical research has refuted Chomsky’s claims that recursion cannot be learned statistically.

[2] Here I am following Lasnik’s Syntactic Structures Revisited.

Hacker and Bennett on Mental Imagery

The Indeterminacy of Translation and Innate Concepts

Ordinary Language Philosophy and Mental Imagery

Hacker and Bennett both argue against claims that mental imagery is depictational. This view strongly contrasts with the experimental work of Kosslyn and Ganis, who claim that their neuroscientific research on topographically organised neuronal configurations which map onto visual patterns in the world shows that our imagery is in fact depictational. Hacker and Bennett argue against this view by noting that mental imagery does not meet the criteria for being depictational.

They note that when something is depictational it has both depictational and non-depictational elements. Thus a picture which is used to depict a dog has some non-depictational elements, such as the paper used, the paint used, etc., as well as depictational elements, such as the spatial relations of the parts of the picture and the color of the painting. The same is true of a piece of writing, so the ink and paper or…


Linguaform Philosophy of Mind Part 2

                      LINGUAFORM PHILOSOPHY OF MIND: PART 2

In my last blog I summarised three arguments which are typically used by philosophers to argue that having a language is necessary for thinking. In this blog I will criticise the first of these arguments and show that it does not work, and hence does not provide conclusive evidence that language is necessary for thought. The three arguments were: (1) We do not have sufficient evidence to justify attributing an ontology to a child who has not mastered the syntax of quantification; (2) If we attribute thought to creatures on the basis of their reliably discriminating aspects of their environment, then we will end up attributing thought to thermometers, etc.; (3) When we attribute beliefs to non-linguistic creatures, the ascriptions behave in an anomalous manner in relation to referentially opaque sentences. Each of these arguments requires a detailed critique, so I will give each argument its own blog.

 

Argument 1: An Experimental Reply:

In their 1991 paper ‘Ontological categories guide young children’s inductions of word meaning’[1], Soja, Carey, and Spelke tested whether the ontological distinction between objects and non-solid substances conditions the projection of word meanings prior to the child’s mastery of count/mass syntax. Quine denied that children make any ontological commitments prior to learning the syntax of quantification, which helps them master the count/mass noun distinction. Here Quine is making an empirical claim that, prior to grasping the count/mass distinction, an agent like Mother, a property like Red, and a non-solid substance like Water are on a par for the child. According to Quine, it is only when a child has mastered the apparatus of divided reference, through grasping the syntax of quantification, that the child can distinguish these categories. Soja et al. set out to test these empirical claims of Quine’s. Before describing their experiment I will outline Quine’s views on language acquisition.

                                             QUINE’S POSITION

 

Quine claims that when young children mouth words such as ‘Mama’, ‘Water’, or ‘Red’, we are in no position to state that they are using the words as terms which refer to the same things that we refer to with those sounds.

For though we may fully satisfy ourselves that the child has learned the trick of using the utterances ‘mama’, and ‘water’ strictly in the appropriate presences, or as a means of inducing the appropriate presences, we still have no right to construe these utterances in the child’s mouth as terms, at first, for things or substances. (1969, 7)

 

According to Quine, from our own mature perspective, we have come to view the child’s mother as a body which revisits the child from time to time, and water as a scattered object. However, from a behavioural perspective, we have little justification for imputing this ontology onto the child.  After making this negative point about our lack of justification for imputing our mature ontology onto the child, he then goes on to make a positive point about the nature of the child’s ontology.

But the mother, red, and water are for the infant all of a type: each is just a history of sporadic encounter, a scattered portion of what goes on. His first learning of the three words is uniformly a matter of learning how much of what goes on about him counts as the mother, or as red, or as water. It is not for the child to say in the first case ‘Hello mama again’, in the second case ‘Hello another red thing’ and in the third case ‘Hello more water’. They are all on a par: Hello more mama, more red, and more water. (ibid., 7)

 

Here Quine is clearly claiming that young children who use words, such as ‘Mama’, ‘Red’ and ‘Water’ are not distinguishing them in terms of being respectively Objects, Properties, and Non-solid substances. His reason for arguing so is that we have no positive behavioural evidence to support the claim that children make such distinctions, and in the absence of such positive evidence, there is little reason to impute such a rich ontology to young children. Quine’s claim is that we should only attribute to children the ability to distinguish between Objects, Substances and Properties when we have behavioural evidence which supports us making this distinction.

Progressively, however, the child is seen to evolve a pattern of verbal behaviour that finally comes to copy ours too closely for there to be any sense in questioning the general sameness of conceptual scheme. For perspective on our own objectifying apparatus we may consider what steps of development make the difference between the ‘mama’-babbling infant who cannot be said to be using terms for objects, and the older child who can. It is only when the child has got on to the full and proper use of individuative terms like ‘apple’ that he can properly be said to have taken to using terms, and speaking of objects. Words like ‘apple’, and not words like ‘mama’ or ‘water’ or ‘red’, are terms whose ontological involvement runs deep. To learn ‘apple’ it is not enough to have learned how much of what goes on around you counts as apple; we must learn how much counts as an apple, and how much as another. Such terms possess built-in modes of individuation. (ibid., 8)

 

Now Quine acknowledges that the child may learn ‘apple’ in the same way that he learns ‘mama’ or ‘red’ but he goes on to say that the child will never master ‘apple’ in its individuative use until he gets on with the scheme of enduring physical objects. And in order to get on with the scheme of enduring physical objects, the child will need to master the apparatus of identity, difference etc.  Quine claims that to be able to tell if the child has got the trick of individuation down, we need the following:

How can we ever tell if the child has got the trick of individuation? Only by engaging him in sophisticated discourse of ‘that apple’, ‘not that apple’, ‘an apple’, ‘same apple’, ‘these apples’, ‘another apple’. It is only at this level that a palpable difference emerges between genuinely individuative use and the counterfeits lately imagined. (ibid., 9)

 

It is at this stage that Quine claims that we are justified in attributing an ontology to the child. Prior to that, attributing an ontology to the child is making an unsupported conjecture which is not justified by the facts.  For Quine, our child learns the adjectives ‘same’, ‘another’, ‘an’, ‘that’, ‘not that’ contextually. First the child gets used to various longer phrases which contain them, and he gradually develops appropriate habits in relation to the component words as common parts and residues of those longer forms. He further speculates that the contextual learning of all of these various different particles goes on simultaneously, so that we gradually adjust them to each other as a coherent pattern of usage is evolved (ibid., 10). So the story of child ontology as Quine tells it is that the child’s words just represent scattered portions of what goes on and do not distinguish between Objects, Properties and Substances.

So Quine’s picture of a child learning language, and the ontology which is implicit in this language, involves pared-down assumptions according to which we attribute to the child no more than is necessary to explain his verbal behaviour. Quine treats the babbling which a child begins to emit at the age of 12 months as a form of operant behaviour which is emitted rather than elicited. He claims that the family of the child will reinforce the child’s verbal behaviour (such reinforcement made possible by the child’s pre-linguistic quality space) in such a manner that the child’s use of observation sentences such as ‘mama’ will reliably distinguish between ‘mama’ portions of the environment and ‘non-mama’ portions. However, at this stage we cannot credit the child with having an ontology; from the point of view of external verbal behaviour, we have no reason to attribute to the child a concept of ‘Mama’ as the name of a spatio-temporal object, as opposed to treating it as a mere mass term like ‘Water’. It is only when we engage the child in discourse and he can answer questions using terms such as ‘not that mama’, ‘same mama’, ‘another mama’, etc., that we are justified in attributing to the child a concept of ‘mama’ as an object, as opposed to a scattered portion of the mama environment.

                                    TESTING QUINE’S CLAIMS

Soja et al. conducted their experiments to test Quine’s claim (1960, 1969) that young children only develop an ontology after they have grasped the syntax of quantification.  Contrary to Quine, they claimed that young children have a distinction between different ontological categories prior to grasping the syntax of quantification, and that in fact these ontological categories constrain the process of language learning. They distinguished their views from Quine’s in the following way:

According to Quine, then, when children hear a new word, the meaning they assign to it is determined by procedure 0:

Procedure 0: Conclude that the word refers to aspects of the world that share salient properties of the perceptual experience when the word is used. (1991, 182)

 

Soja et al. proposed a different view of the procedures children use when they learn a new word; their procedure assumed that the child had ontological categories prior to learning the syntax of quantification.

Procedure 1    Step 1: Test to see if the speaker could be talking about a solid object; if yes,

Step 2: Conclude the word refers to individual whole objects of the same type as the referent.

Procedure 2 Step 1: Test to see if the speaker could be talking about a non-solid substance; if yes,

Step 2: Conclude the word refers to portions of substance of the same type as the referent.  (ibid., 183)

 

Soja et al. proposed an experiment which would decide between these two different proposals about how children learn new words.

One way for Soja et al. to test whether Quine was correct, or whether they were, was to examine how children generalise newly learned words. If children generalise words for objects and words for non-solid substances differently before they have grasped count/mass syntax, this would be evidence that Quine was wrong. They therefore ensured that the experiment was done on children below the age of 2½, before children typically master the syntax of quantification. They tested how children generalise words for non-solid substances as well as for objects. If Quine is right that children generalise names by using Procedure 0, then children will generalise names based on shape whether the name originally refers to an object or a non-solid substance.
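To make the contrast between the two proposals concrete, here is a minimal sketch in Python under my own simplified encoding of the stimuli. The procedure names follow Soja et al., but the feature dictionaries, example items and function names are assumptions made purely for illustration.

```python
# A minimal sketch of the two competing word-learning proposals, under my own
# simplified encoding of the stimuli (not the authors' materials).

def procedure_0(named, candidate):
    """Quine-style projection: generalise by salient perceptual similarity,
    here crudely approximated by matching on shape."""
    return candidate["shape"] == named["shape"]

def procedures_1_and_2(named, candidate):
    """Ontology-guided projection: track objects by shape, non-solid
    substances by material (ignoring shape)."""
    if named["solid_object"]:
        return candidate["shape"] == named["shape"]
    return candidate["material"] == named["material"]

# Object trial: the named T-shaped pipe vs. a same-shaped item in a new material.
pipe = {"solid_object": True, "shape": "T", "material": "metal"}
plastic_t = {"shape": "T", "material": "plastic"}

# Substance trial: a named non-solid substance vs. a different substance
# presented in the original's configuration.
gel = {"solid_object": False, "shape": "pile", "material": "gel"}
cream_same_shape = {"shape": "pile", "material": "cream"}

for proc in (procedure_0, procedures_1_and_2):
    print(proc.__name__,
          "| same-shape item counts as a 'blicket':", proc(pipe, plastic_t),
          "| same-shape substance counts as 'stad':", proc(gel, cream_same_shape))
# Both proposals agree on the object trial, but only Procedure 0 predicts a
# shape-based choice on the substance trial -- the contrast the experiment exploits.
```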

THE EXPERIMENT

Twenty-four 2-year-olds from the Greater Boston area were recruited and randomly placed into two groups (an informative syntax group and a neutral syntax group), with equal numbers of boys and girls in each group. Each testing session began with two familiar trials: one object trial and one substance trial. The stimuli in the familiar object trial were a blue plastic cup, a white Styrofoam cup and cup pieces. The stimuli in the familiar non-solid substance trial were peanut butter and Play-Doh. These trials followed the same format as the unfamiliar trials described below. The two familiar trials were followed by eight unfamiliar trials: four object trials and four substance trials, which were intermingled. The subjects were tested on each trial on two separate occasions. Eight novel words were used: ‘blicket’, ‘stad’, ‘mell’, ‘coodle’, ‘doff’, ‘tanninn’, ‘fitch’, and ‘tulver’ (ibid., 187).

TEST 1: AN UNFAMILIAR OBJECT TRIAL IN THE NEUTRAL SYNTAX CONDITION

The test involved presenting the child with an unfamiliar object, e.g. a T-shaped plumbing pipe, and giving the child a name for the object, e.g. ‘blicket’. In the neutral syntax condition the child is told ‘This is my blicket’. The experimenter then continued to talk about the object using ‘my’, ‘the’ and ‘this’ as determiners. She and the subject then manipulated the object. The object was then placed to the side and two other sets of items were presented to the subject. One set consisted of objects of the same shape as the original but made of a different material, e.g. a plastic T-shape; the other set consisted of pieces of the same material but a different shape, i.e. bits of metal. The experimenter then said ‘Point to the blicket’.

TEST 2: AN UNFAMILIAR SUBSTANCE TRIAL IN THE NEUTRAL SYNTAX CONDITION

The child was shown an unfamiliar substance, and told ‘This is my stad’. The experimenters referred to the substance using only the determiners ‘my’, ‘the’ and ‘this’. The experimenter and the subject talked about the substance and played with it. In the presentation of test substances, the subject was shown two substances, the original and a new one, and told ‘Point to the stad’. The original substance was in the alternative configuration, whereas the new substance was in the configuration used originally with the named substance. There were four pairs of substances: (1) Dippity-do (a setting gel) and lumpy Nivea (a hand cream mixed with gravel), (2) coffee (freeze-dried) and orzo (a rice-shaped pasta), (3) sawdust and leather (cut into tiny pieces), (4) crazy foam and clay. Of each pair, one member was named and the other was used as the alternative to the original in the test presentation. Each member served in both roles across subjects.

TEST 3: OBJECT AND SUBSTANCE TRIALS IN THE INFORMATIVE SYNTAX CONDITION

This condition differed from the neutral syntax condition only in the determiners and quantifiers used when naming the original stimulus. The experimenter introduced an object trial in the informative syntax condition with ‘This is a blicket’ and used ‘a blicket’ and ‘another blicket’ in subsequent discussion. Substance trials in the informative syntax condition were introduced with ‘This is stad’, and in subsequent discussion the experimenter either omitted determiners or used ‘some’ or ‘some more’. This was the only difference between the informative and neutral syntax conditions. In the familiar word trials subjects differentiated the object and the substance trials as predicted.
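For readers trying to keep the conditions straight, here is a compact restatement of the design in Python. The phrasings are quoted from the description above; the dictionary layout is simply my own summary, not the authors' materials or coding scheme.

```python
# My own summary of the 2 (syntax condition) x 2 (trial type) design described above.
design = {
    ("neutral", "object"):        {"intro": "This is my blicket", "follow_up": "'my', 'the', 'this'"},
    ("neutral", "substance"):     {"intro": "This is my stad",    "follow_up": "'my', 'the', 'this'"},
    ("informative", "object"):    {"intro": "This is a blicket",  "follow_up": "'a', 'another'"},
    ("informative", "substance"): {"intro": "This is stad",       "follow_up": "no determiner, 'some', 'some more'"},
}

for (condition, trial), phrasing in design.items():
    print(f"{condition:11} {trial:9} intro: {phrasing['intro']!r:22} follow-up: {phrasing['follow_up']}")
```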

                                   WORD LEARNING TRIALS

Subjects differentiated the two types of trials.  Responses were consistent with shape and number on the object trials, and were not consistent with shape and number in the substance trials.

                                     WHAT THE TEST SHOWS

If the child differentiates in the above manner before he has grasped the syntax of quantification, this shows that the child is not generalising the word on the basis of perceptual similarity alone, but is doing so based on the type of entity he is presented with. So, for example, if he were generalising according to an innate perceptual similarity quality space which focuses on shape, then why does this not work for substances? The answer is that the child recognises that objects and substances are distinct ontological categories. Soja et al. summed up their results as follows:

In sum, the children chose according to object type when the stimulus was an object and according to substance type when the stimulus was a non-solid substance. There was no effect of the syntactic context: performance was neither facilitated nor hindered by the additional syntactic information.

The data from Experiment 1 show that different inferences about the meaning of a newly heard word are drawn according to the ontological status of its referent. If the word refers to an object, the child’s projection respects shape and number, and ignores texture, color, and substance. If the word refers to a non-solid substance, the child’s projection ignores shape and number, respecting texture, color and substance. (ibid., 192)

 

From this experiment, Soja et al. claim to have shown that Quine’s view of how children learn language is incorrect, because the experiment shows that, contrary to what Quine claims, children do indeed distinguish between different ontological categories prior to grasping count/mass syntax. It also suggests that these innate ontological categories, and not the apparatus of quantification, are what help a child learn a language. Soja et al.’s experiment purports to show that children learn words according to ontological distinctions which they exhibit knowledge of prior to learning a language. So their experiment strongly indicates that Quine is wrong on the issue of whether a child needs to master the syntax of quantification in order to have ontological commitments. It should be noted that this experiment was done on a very small sample, so it needs to be replicated on different samples before it can be considered a conclusive refutation of Quine’s position. Nonetheless the experiment is extremely suggestive and does indicate that Quine is wrong.

In the next blog I will consider argument 2, the thermometer argument, and show how it can be handled by adopting the intentional stance. I will also consider objections to the intentional stance reply, in particular Donald Davidson’s intentional realism. In the third blog I will consider the referential opacity objection.


 

 

Linguaform Philosophy of Mind Part 1

LINGUAFORM PHILOSOPHY AND WHY LANGUAGE IS SUPPOSEDLY NECESSARY FOR THOUGHT

My friend Matt Bush, when criticising analytic metaphysics, plausibly accuses many of its practitioners of being too deeply wedded to interpreting reality in terms of language. Matt’s criticisms are in my view entirely to the point. Since the birth of analytic philosophy, the analysis of language as a way of understanding philosophical problems has been a central foundation. Philosophers like Frege and Russell, by constructing artificial languages to help them speak more accurately about the foundations of mathematics, set the ball rolling for analytic philosophy’s obsession with language. Wittgenstein in his ‘Tractatus’, using the insights of Russell and Frege, tried to construct a theory of meaning for language which he used to demarcate between statements with sense and nonsensical statements. His later ‘Philosophical Investigations’ took the facts of ordinary language more seriously and focused on the conditions of sensible use of language, as opposed to the more abstract logical view of language set out in the ‘Tractatus’. Nonetheless, in both the earlier and later phases of his philosophical development, Wittgenstein thought that an analysis of language was the way to solve philosophical problems. Non-naturalistic philosophers like Austin, Ryle and Hacker have followed him in this approach to philosophy.

I have discussed ordinary language philosophy in detail in other blogs so I will not go into it here. More naturalistically inclined philosophers have also followed Frege, Russell and Wittgenstein in making language central to philosophy; they have used language as a way of trying to solve philosophical issues in both ontology and epistemology. Matt has criticised non-naturalistic analytic metaphysicians’ attempts to understand metaphysics through linguistic analysis. I share his misgivings when it comes to analytic metaphysics, though that will not be my focus here; I will instead focus on a misplaced emphasis on language in naturalistic philosophy of mind. The emphasis on language, which has been with analytic philosophy from the start, has, I will argue, had a distorting influence on how naturalistic philosophers have thought about the mind.

Before proceeding I should add that I think that the linguistic turn, while excessive, has led to some real improvements in the way that philosophy is done. So my aim here is not to slate linguistic philosophy, merely to curb some of its excesses in the philosophy of mind. In the second part of this blog I will argue that naturalistic analytic philosophers have vastly overestimated the degree to which cognition is tied to linguistic abilities. However, prior to doing this, I will first try to summarise the best arguments in favour of the view that thought requires language.

Contemporary(ish) philosophers who make language central to cognition, such as Davidson, Dennett, Rorty, and Brandom, have two primary influences: Wilfrid Sellars and W.V. Quine. I will now try to outline some of the best arguments from these six thinkers that language is central to thought.

In his 1956 ‘Empiricism and the Philosophy of Mind’, Sellars argued that if we want to interpret a creature as being a thinker we need to do so in terms of language. Here is Sellars on the topic[1]:

“All awareness of sorts, resemblances, facts, etc., in short all awareness of abstract entities-indeed, all awareness even of particulars-is a linguistic affair. According to it, not even the awareness of such sorts, resemblances, and facts as pertain to immediate experience is presupposed by the process of acquiring the use of language” (Science, Perception and Reality, p. 160)

Sellars notes that we can distinguish between two different kinds of awareness. The first type of awareness is the ability to reliably distinguish between aspects of one’s environment. The second type of awareness comes from being in the logical space of reasons and having the ability to justify what one claims or believes about the world.

When one considers Sellars’s two types of awareness, it is obvious that the first does not require language while the second does. Sellars argues that the first sense of awareness is so permissive that it counts virtually everything as aware. In his (1995) ‘The Problem of Objectivity’ Davidson agrees with Sellars that awareness in the first sense (mere classification) is too permissive to serve as a criterion for saying which creatures are aware:

“My reason for resisting this usage is that if we were to accept it we would be committed to holding that the simplest animals have concepts: even an earthworm, which has so little brain that, if cut in two, each part behaves as the undivided whole did, would have the concepts of dry and moist, of the edible and inedible. Indeed, we would have to credit tomato plants or sunflowers with the concepts of day and night.” (Davidson: 1995 p. 8)

Robert Brandom in his (1995) ‘Making it Explicit’ makes a similar point:

“Reliable differential responsive dispositions are only a necessary condition for observational knowledge. Parrots and thermometers can have such dispositions and so can be used by us in effect as measuring instruments to acquire knowledge. But what they have is not knowledge. For they do not understand the significance of their responses; they do not take these responses as reasons for further claims; and they do not understand claims as potentially in need of reasons” (Brandom: 1995 p. 215)

So philosophers like Rorty, Davidson and Brandom, amongst many others, follow Sellars in thinking that a creature which, we discover through behavioural tests, can reliably distinguish between aspects of its environment is not necessarily aware. For true awareness to occur a creature needs the ability to place itself in the space of reasons.

Sellars’s argument, that if we take the ability to reliably distinguish between aspects of the environment as the criterion of awareness then we will be committed to the view that thermometers and plants are aware, has convinced a lot of philosophers that justification is the better criterion[2]. So Sellars’s argument is one of the key reasons philosophers have for thinking that language is central to thought.
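To see why the thermometer objection bites, here is a toy sketch in Python of a device that satisfies the first, permissive sense of awareness. The class, its threshold and its outputs are all invented for illustration; nothing here comes from Sellars or Brandom.

```python
# A toy "thermometer": it reliably discriminates states of its environment
# (awareness in the first, permissive sense) but has nothing corresponding
# to giving or asking for reasons.

class Thermometer:
    def __init__(self, threshold_celsius=25.0):
        self.threshold = threshold_celsius

    def respond(self, temperature_celsius):
        # A reliable differential responsive disposition: different inputs
        # reliably produce different outputs.
        return "hot" if temperature_celsius >= self.threshold else "cold"

t = Thermometer()
print(t.respond(30.0))  # hot
print(t.respond(10.0))  # cold
# Nothing here justifies, infers, or treats its output as a claim in need of
# reasons -- which is why reliable discrimination alone looks too weak a
# criterion for awareness or thought.
```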

Donald Davidson merges the views of Quine with those of Sellars to provide further arguments that language is necessary for thought. Quine argued that prior to mastering the syntax of quantification we have no justification for imputing an ontology to a child. A child may learn to mouth a sound, e.g. ‘MAMA’, in certain intersubjectively salient circumstances; the child may even internalise recursive abilities. However, these abilities do not prove that the child has an ontology. When the child says ‘MAMA’ we have no behavioural evidence which indicates whether the child is using it as a term, an observation sentence, a count noun, a mass term, etc. Davidson, following Quine, argues that:

“What calls for ontology is the apparatus of pronouns and cross reference in natural languages, what we represent by the symbolism of quantifiers and variables in elementary logic. These devices provide the resources for constructing complex predicates, and at this point semantics must map names and predicates on to objects.” (Davidson: 2001 p. 140)

While Sellars’s argument above has a lot to offer, Quine’s argument is manifestly false and has been shown to be so by the experimental work of Soja et al. (1991), which I will review in the next blog.

A further argument against attributing propositional attitudes to non-linguistic creatures turns on the difference between the logic of belief ascription for linguistic and non-linguistic creatures. So, for example, consider the statements (1) John believes that ‘Batman caught the Joker’, and (2) John believes that ‘Bruce Wayne caught the Joker’. Sentence (1) can be true while sentence (2) is false. This is because, even though ‘Bruce Wayne’ and ‘Batman’ refer to the same person, John may not know this. We can see from the preceding examples that belief contexts are referentially opaque: substituting co-referring terms can change the truth value (the same is true of modal contexts). When dealing with ordinary sentences, referential transparency obtains. Thus consider (3) Batman caught the Joker, and (4) Bruce Wayne caught the Joker. If ‘Batman’ and ‘Bruce Wayne’ refer to the same person, then (3) and (4) must have the same truth value.

Davidson notes that the logic of belief ascription changes when we consider non-linguistic creatures. He asks us to consider the case of a dog chasing a cat: the cat runs up tree x, and the dog, which does not see this, starts barking up the tree at the cat. Davidson notes that a person viewing the scene could say that the dog is barking up the wrong tree. So, for example, I could say (5) The dog believes that ‘the cat is up the oak tree’, or (6) The dog believes that ‘the cat is up the tallest tree in the forest’. Now in the case of a linguistic creature, for example me, I may not know that tree x is both the tallest tree in the forest and an oak tree, so (5) and (6) are referentially opaque. However, things are different with a non-linguistic creature: (5) and (6) do not behave in the same logical way as they do for linguistic creatures. We have no way of saying that (5) and (6) differ in truth value, because there is no fact of the matter as to whether the dog takes ‘the tallest tree in the forest’ and ‘the oak tree’ to pick out the same object. Davidson argues that since this feature of referentially opaque belief contexts does not apply to non-linguistic creatures, we may not be justified in applying propositional attitude explanations to such creatures.
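The asymmetry Davidson is pointing to can be sketched schematically. The following Python snippet is my own encoding, not Davidson's: facts about the world are modelled as holding of objects (so only the referent matters), whereas John's beliefs are modelled as a set of sentences he would accept (so the words matter too). All the names, dictionaries and functions are invented for illustration.

```python
# Substitution of co-referring names preserves truth in ordinary (extensional)
# contexts but not in belief contexts, on this simple model.

reference = {"Batman": "wayne", "Bruce Wayne": "wayne"}   # co-referring names
facts = {("wayne", "caught the Joker")}                   # what is true of the object

def true_of_world(name, predicate):
    """Extensional context: only the referent matters."""
    return (reference[name], predicate) in facts

johns_accepted_sentences = {"Batman caught the Joker"}    # John doesn't know the identity

def john_believes(name, predicate):
    """Opaque context: the sentence itself matters, not just the referent."""
    return f"{name} {predicate}" in johns_accepted_sentences

for name in ("Batman", "Bruce Wayne"):
    print(name,
          "| true of the world:", true_of_world(name, "caught the Joker"),
          "| John believes it:", john_believes(name, "caught the Joker"))
# Swapping co-referring names never changes the first column, but it can change
# the second. For a non-linguistic creature there is no set of accepted
# sentences to consult, which is the asymmetry Davidson's argument turns on.
```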

So the three primary arguments that philosophers use for the claim that only language-using creatures can think are: (1) it is over-inclusive to claim that creatures which can reliably distinguish between aspects of their environment have thoughts (the thermometer objection); (2) only creatures which have mastered the syntax of quantification have an ontology; (3) belief ascriptions to non-linguistic creatures behave in logically anomalous ways, which implies that such creatures are not really believers.

In my next blog I will critique the above three arguments and show that they do not actually demonstrate that non-linguistic creatures cannot think. In my final blog of this series I will consider Dennett’s more subtle linguaform views as they apply to consciousness and will show that his views also do not work.


[1] Sellars quotes taken from Richard Rorty, ‘Philosophy and the Mirror of Nature’, pp. 182-192.

[2] Dennett bites the bullet and argues that there should be no problem with attributing intentionality to certain artefacts as is pragmatically needed.