
Human and Non-Human Animal Brain Interfaces

HUMAN ANIMAL AND NON-HUMAN ANIMAL BRAIN INTERFACES
“If a Lion could talk, we wouldn’t be able to understand it” (Wittgenstein: Philosophical Investigations p. 235)

In a recent paper the futurist Harish Shah speculated on the possibility of developing human to non-human brain interfaces as a means for humans to communicate with other, non-human animals. He even speculated on animals of different species using the same technology to communicate with each other. Shah is talking about a really interesting area of research. He is quite correct to note that our ability to control a mechanical arm using brain interfaces does indeed imply that we should be able to use similar technology between humans and non-human animals.
However I would like to sound a brief note of caution about his optimistic speculation. Firstly, one needs to be very careful with the word ‘communication’, to avoid using the same word when it means different things in different contexts. When I communicate with others using language there is a lot going on at the personal level (the intentional stance) and at the sub-personal level (brain processes). So at the sub-personal level Broca’s area in the cortex is important for producing speech and Wernicke’s area in the cortex is used for understanding speech. Now in producing speech different areas of the brain will be communicating with each other, so my Wernicke’s area will be in communication with my temporal lobes and occipital lobes if my speech involves the use of any images. All of this is very crude, and my reason for bringing up this preschool model of brain areas communicating with each other is to emphasise how un-language-like it is. When different brain areas communicate with each other they do so in terms of things like topographic maps (see Paul Churchland’s ‘Plato’s Camera’). So when a brain interfaces with a robot arm it is important to note that at the sub-personal level nothing language-like, involving conscious awareness, occurs. The person in the experiment may be consciously aware of a desire to move his arm, or may be directed to do so by one of the scientists in the team, but the brain moves the arm because of un-language-like computational interfaces between the brain and the robot arm. These facts are important to think about when assessing Shah’s speculations.
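To make this sub-personal point concrete, here is a minimal sketch of the kind of computation a motor brain-computer interface typically performs: a regression-style decoder that maps recorded firing rates onto intended arm or cursor velocities. The dimensions, names and synthetic data are made up purely for illustration, but notice that nothing sentence-like appears anywhere in the loop.

```python
import numpy as np

# Illustrative sketch of a motor brain-machine interface decoder:
# firing rates in, arm/cursor velocities out. All numbers are synthetic.

rng = np.random.default_rng(0)

n_neurons, n_samples = 96, 5000           # e.g. a 96-channel electrode array
true_W = rng.normal(size=(n_neurons, 2))  # each neuron's 'tuning' to x/y velocity

# Synthetic calibration data: observed firing rates for known intended velocities.
velocities = rng.normal(size=(n_samples, 2))
firing_rates = velocities @ true_W.T + rng.normal(scale=0.5, size=(n_samples, n_neurons))

# Fit a linear decoder by least squares: velocity ~ firing_rates @ W_decoder.
W_decoder, *_ = np.linalg.lstsq(firing_rates, velocities, rcond=None)

# At run time the interface simply applies the decoder to the latest rates.
new_rates = velocities[:1] @ true_W.T     # one new 'trial' of recorded activity
decoded_velocity = new_rates @ W_decoder
print(decoded_velocity)                   # drives the robot arm; no sentences anywhere
```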
Shah speculates as follows:
“Brain signals exist in animals, though perhaps operating in different ways or at different frequencies than humans. However it is a matter of time, before the same type of thought reading apparatus used for human-to-human interaction can, and will, be adjusted to identify, decipher and read animal thoughts.”
“And then now think, if we figure a way, to relay our thoughts or intended messages, to your pets or other animals, in a way, they can understand? Going to the zoo will become a very different experience from what it is today. Imagine if you could have a conversation with a lion. For example, you think the question, “How are you feeling today?”, which gets converted into data, and then signal, which is transmitted to the brain of the lion, in a form, which the lion is able to interpret and understand, in response to which, the lion thinks, a thought that is captured by a brain interface device picking up it’s brainwaves, converting it, translating into a human language you understand, converting it to voice sound, relayed back to you, which you hear in your head, whereby the lion’s response may sound to you perhaps like, “The same as I do every day when humans like you come by across that barrier. Hungry.”
Here there are a couple of points to note. I would imagine that it will very soon be possible, through brain implants (in the rat’s motor cortex), to give me partial (or full) control of a rat’s behaviour directly via my own brain. But this control will again all be done via the sub-personal computational processes we discussed above. With this type of interface I will not have any communication (at the personal level) with the rat.
What about a direct interface so we can communicate linguistically via the interface with non-human animals? Such interfaces may not be as feasible as Shah seems to think. In a 2002 experiment Hauser and Fitch compared the grammar-learning abilities of tamarin monkeys and humans. Their studies showed that while tamarin monkeys could learn finite state grammars easily they could not learn phrase structure grammars, whereas humans could learn both. The neuroscientist Angela D. Friederici decided to test this finding from a neurological point of view. Since she did not have detailed neurological studies for cotton-top tamarins, she instead compared human and macaque brains. She used a study by Petrides and Pandya (1994) which analysed the cytoarchitectonic structure of the frontal and prefrontal cortices in human and macaque brains. Anterior to the central sulcus (cs) there is an area called BA 6. Friederici notes the following:
“BA 6 is particularly large in humans and in the monkey. However in the monkey the areas that seem relevant for language, at least in the human brain, are pretty small in the macaque” (The Brain Differentiates Grammars p. 186)
“For the group that learned the Finite State Grammar, we found activation in the frontal operculum, an area that is phylogenetically older than the Broca’s area, for the comparison between grammatically correct and incorrect sequences. Interestingly enough, difficulty cannot be an explanation here because behaviourally, no difference was found between the short sequences of the FSG and the long sequences. In the imaging data a difference was seen in the delay of activation peak and early peak, with an early peak for the short, and a late peak for the long FSG. But what do we find for the Phrase Structure Grammar learning group? Here again, not surprisingly, the frontal operculum is active, but now additionally Broca’s area comes into play. And again when we compare the short sequences and the long sequences, difficulty does not matter.” (ibid p.187)
Now the fact that only the phylogenetically older areas of the brain were used for processing Finite State Grammars (an artificially constructed grammatical rule), whereas for the phrase structure grammar (the actual structure of human language) both the phylogenetically older area and the phylogenetically newer area are used, is interesting, because the areas involved in learning a Phrase Structure Grammar (the newer areas in the brain) are much smaller in the macaque brain. This of course is why macaques cannot learn the Phrase Structure Grammar rules but can learn the finite state rules. The macaque doesn’t have a sufficiently developed cortex.
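For readers unfamiliar with the distinction, the two types of grammar at issue can be illustrated with a toy sketch. This is my own simplified reconstruction of the sort of string sets used in such studies, not the actual experimental stimuli: the finite state grammar generates strings of the form (AB)^n, e.g. ABAB, while the phrase-structure-like grammar generates centre-embedded strings of the form A^n B^n, e.g. AABB, which no finite state machine can recognise in general.

```python
import re

# Toy versions of the two artificial grammar types discussed above
# (my own simplified reconstruction; 'A' and 'B' stand for the two
# syllable classes used in such experiments).

def is_fsg(s: str) -> bool:
    """Finite state grammar: strings of the form (AB)^n, e.g. ABAB."""
    return re.fullmatch(r"(AB)+", s) is not None

def is_psg(s: str) -> bool:
    """Phrase-structure-like grammar: strings of the form A^n B^n, e.g. AABB.
    Centre-embedded dependencies like this require (at least) a counter or
    a stack; no finite state machine can recognise them in general."""
    n = len(s) // 2
    return len(s) % 2 == 0 and n > 0 and s == "A" * n + "B" * n

for string in ["ABAB", "AABB", "ABBA", "AAAB"]:
    print(string, "FSG:", is_fsg(string), "PSG:", is_psg(string))
```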
Firstly I should note that the above experiment is one of many, and it would take me well beyond the remit of this blog to go through them all. Here I merely want to note that these are the types of experiments we need to take note of (as well as mathematical modelling of the verbal behaviour of human and non-human animals) if we want to understand the potential of human to non-human interfaces for communication. Shah doesn’t go into the important detail of what area of the human’s brain the animal is connected up to, and what area of the animal’s brain the human is connected up to. So one could ask: are the implants placed in the pineal gland of both human and animal so they can communicate with each other via their respective Cartesian theatres? This joke of a question may sound a bit unfair on Shah, but it is important to understand exactly how he thinks the process will work before we can evaluate his claims.
Another mistake that Shah makes is in acting as though the animal and human will speak to each other in English (or some form of mentalese). Firstly, the vast majority of animals will not have the ability to construct grammatical sentences using a phrase structure grammar in the way we do. So even if (and it’s a big if) they do think in propositional attitudes, the attitudes will be so different from ours that they will not think in ways that are recognisably human. Furthermore we know from studies of animals that some animals are not social in the way we are. Very few animals recognise pointing as an attempt to share information about a shared object of experience. So with such interfaces we will be powerless to make the animal share information, because (given the type of brain it has) this is not something this particular type of animal does.
Now one could reply to my objection that I am misrepresenting Shah because he is actually talking about animal thoughts being translated into English before they are communicated to us. But things are even worse in this case. If we translate and beat a lion’s thoughts into our grammatical form, and into terms we typically use (see Lakoff and Johnson for how much of our ordinary language is shot through with metaphors) we will be changing them to such a degree that we will not be justified in calling them an ordinary lion’s thoughts. Dennett makes this point forcibly:
“Wittgenstein once said, “if a lion could talk, we could not understand him”. I think, on the contrary, that if a lion could talk, that lion would have a mind so different from the general run of lion minds, that although we could understand him just fine, we would learn little about ordinary lions from him” (Consciousness Explained p. 446)
So Shah’s translation claim doesn’t really help him make his case for the type of animal human interfaces he is arguing for.
An incredible amount of human thought is done unconsciously; we do not yet have enough data as to how much of the animal behaviour that looks purposive actually involves conscious thought, let alone whether the animal would be prepared to communicate this knowledge to us. This is why passages like the one below are fanciful to say the least:
“And because it is a probable eventuality, it is valuable to forecast a scenario, where the technology becomes an actual reality. Imagine what could happen if we could use such technology to converse with animals, as we converse with each other in the present day. What would the subjects be? Perhaps spirituality, where we ask animals if they understand the concept of life and death, and if so, what it means to them? Do different animals believe in an afterlife? Do they believe in a heaven or a hell? Do they believe in karma or a greater entity? Do they worship in their own way?”
It is possible that we could have direct access to the images and memories of some animals by interfaces between their temporal lobe etc and our occipital lobe. However such memories will not necessarily have a propositional form, or they could have a propositional form imposed by us as our brain interacts with theirs. Furthermore our occipital lobe being a different character than theirs may modify the images etc to some degree. So instead of getting direct knowledge from the animal about its world we would be interpreting it from the point of view of our own mode of cognition.
One final difficulty I have with Shah’s piece is that he seems to think that such communication, if possible, would have ethical consequences:
“Imagine if we could ask different species of animals, how they feel about different things? For example, if we could ask a chicken about its sentiments that humans take its eggs or its life, for food consumption? Only time would tell, when such communication is possible, if it’s response, if any, or whatever it is, would suffice to convert the entire human population to vegetarianism. What would different animals say to us, if we asked how they felt about humans in general? Would they say they think we are cute and adorable? Or would they say we frighten them? Or would they say, we look tasty?”
“Perhaps with a means to communicate across species barriers, with brainwave computing, we will able to negotiate with different animals. For example, we engage sharks to reach an understanding, where we would not hunt sharks for human consumption and sharks would not attack swimmers and beach-goers, more so, if a man goes overboard a ship’s deck, sharks would come to his aid. Perhaps we could discuss means of co-existing and cohabiting better, so as not to impede or violate each other’s spaces.”

I somehow don’t see these ethical consequences: after all, humans have the ability to communicate with each other and it hasn’t stopped them from exploiting each other in the least. Furthermore, as I have said, I see little chance of the direct communication he speaks of ever occurring. For the time being, if we want to communicate with animals the best way is to understand their form of life and immerse ourselves in their world as they interact with each other, and to test our beliefs about their beliefs with as many experiments as possible.

Incomplete Nature

INCOMPLETE NATURE, EMERGENCE AND CONSCIOUSNESS
INTRODUCTION
Terrence Deacon’s 2013 book ‘Incomplete Nature: How Mind Emerged From Matter’ has been generally greeted with acclaim by the philosophical community (with some exceptions: see McGinn’s and Fodor’s reviews). His book, which attempts a naturalistic account of how mind emerged from matter, is challenging and extremely well argued. In this blog I will try to summarise and analyse some of the primary arguments in Deacon’s book.
His ‘Incomplete Nature’ begins with an extremely interesting analogy. Deacon speaks about how for many years the number zero was banned from mathematics. Mathematicians feared zero because of its strange paradoxical characteristics so they did not admit it into their mathematical systems. Deacon begins his book with the following quote:
“In the history of culture, the discovery of zero will always stand out as one of the greatest single achievements of the human race.” (Tobias Dantzig)
Deacon argues that a similar situation now obtains for what he calls ententional phenomena. The word ‘ententional’ was coined by Deacon as a generic adjective to describe all phenomena that are intrinsically incomplete in the sense of being in relationship to, constituted by, or organized to achieve something non-intrinsic. This includes function, information, meaning, reference, representation, agency, purpose, sentience, and value (Incomplete Nature p.549). Deacon argues that ententional phenomena, like the concept zero, have a primary quality of absence. These absential features mean the relevant phenomena are determined with respect to an absence. So, for example, our intentional states are directed towards some state of affairs which may or may not obtain, and so have an absential quality. These absential qualities are something that science up until now has found it impossible to give a causal account of. This is a serious state of affairs because something like my ‘belief that P’ can have obvious causal consequences, e.g. me moving my body to position x. Deacon correctly notes that ententional phenomena are treated by scientists in a similar way to the way mathematicians treated zero. Scientists and philosophers treat ententional phenomena as something spooky and paradoxical which needs to be contained or outright eliminated from our ontology. Deacon proposes that admitting ententional phenomena into our ontology will greatly increase our theoretical powers in science, in a similar way to the way admitting zero into mathematics greatly increased our calculational capacities. So the aim of his book is to admit the reality of ententional phenomena and show how they arrived on the scene naturalistically. He argues that his theory can explain how a form of causality dependent on absential qualities can exist; he claims that this explanation is compatible with our best science.
It is interesting to briefly compare Deacon’s view with the views of Ladyman et al as expressed in their ‘Every Thing Must Go’, which I discussed in my last blog. While Ladyman et al treat ententional phenomena as real patterns which do not exist in our fundamental ontology, Deacon treats ententional phenomena in a more realist way. So, for example, on p.38 he talks of the universe as being a causally closed system (all basic causal laws of the universe form a closed system; all change comes from within). This seems radically at odds with Ladyman and Ross, who argue that causality plays no role at the level of basic physics, though it does play a role in the special sciences. However, a closer examination of Deacon’s views shows that his views on causation are actually pretty close to those of Ladyman et al.
Deacon outlines the thesis of his book as follows:
“As rhetorically ironic as this sounds, the thesis of this book is that the answer to the age-old riddle of teleology is not provided in terms of “nothing but…” or “something more…” but rather “something less…” this is the essence of what I am calling absentialism” (Incomplete Nature p.43)

When speaking of his explanations in terms of absence he makes claims about processes and substances which are strikingly analogous to claims made by Ladyman et al:
“Showing how these apparently contradictory views can be reconciled requires that we rethink some very basic tacit assumptions about the nature of physical processes and relationships. It requires reframing the way we think about the physical world in thoroughly dynamical, that is to say, process, terms, and recasting our notions of causality in terms of something like the geometry of this dynamics, instead of thinking in terms of material objects in motion affected by contact and fields of force” (ibid p. 44)
He argues that while it is intuitive to think of the world in terms of billiard ball causality, when it comes to basic physics this notion has been abandoned in favour of fields of probability rather than discretely localizable stuff. It is the failure to overcome this billiard ball conception of stuff which leads people to think that something must be added to make a mind.
He argues that our ultimate scientific challenge is to precisely characterize the geometry of these dynamical forms, from thermodynamic processes to living and mental processes, and to explain their dependency relationships with respect to each other. It is worth at this point considering Deacon’s criticisms of the metaphysical views of Jaegwon Kim.
DEACON ON EMERGENTISM AND KIM
Deacon argues that the reductionism which began with Democritus and produced many successful research programmes, for example the reduction of chemistry to physics, is favoured by most contemporary theorists. This view has led a lot of philosophers to think that smaller is more fundamental (the philosopher Thomas Wilson dubbed this “smallism”). Deacon correctly notes that this view is not necessarily true:
“It is not obvious, however, that things do get simpler with a descent in scale, or that there is some ultimate smallest unit of matter, rather than merely a level of scale below which it is not possible to discern differences.” (Incomplete Nature p.153)
Nonetheless he points out that a lot of science has been successful as a result of thinking of objects in terms of their component parts. He claims that, despite the success that thinking of objects as made of component parts has had in science, it has the difficulty that it focuses attention away from the contributions of interaction complexity. This, he complains, suggests that investigating the organisational features of things is less important than investigating their component properties.
When discussing emergence Deacon gives Sperry’s example of how consciousness is a product of certain configurations of matter which are only found in the brain. Though this is not a new type of matter the configuration can have certain causal consequences that other configurations of matter do not have. Sperry uses the analogy of how a wheel is a particular configuration of matter. This configuration does not involve new stuff but it does have causal consequences that are unexpected; i.e. the wheel can move in ways that other configurations of matter cannot. So Deacon thinks we can have emergentism without lapsing into any kind of mysterious dualism.
He notes that Jaegwon Kim is generally considered to have refuted certain forms of emergentist theories. He sums up Kim’s argument as follows:
“Assuming that we live in a world without magic, and that all composite entities like organisms are made of simpler components without residue, down to some ultimate elementary particles, and assuming that physical interactions ultimately require that these constituents and their causal powers (i.e. physical properties) are the necessary substrate for any physical interaction, then whatever causal properties we ascribe to higher-order composite entities must ultimately be realized by these most basic physical interactions. If this is true, then to claim that the cause of some state or event arises at an emergent higher level is redundant. If all higher-order causal interactions are between objects constituted by relationships among these ultimate building blocks of matter, then assigning causal power to various higher-order relations is to do bookkeeping. It’s all just quarks and gluons- or pick your favourite smallest unit- and everything else is just a gloss or descriptive simplification of what goes on at that level. As Jerry Fodor describes it, Kim’s challenge to emergentists is: why is there anything except physics?” (Terrence Deacon ‘Incomplete Nature’ 2013 p.165)
Deacon claims that Kim’s challenge can be attacked from the point of view of contemporary physics. He argues that the substance metaphysics which Kim uses to support his mereological analysis is not supported by quantum physics. Deacon’s point is similar to a point made by Ladyman et al. in their ‘Every Thing Must Go’, while Tim Maudlin has made similar points in his ‘The Metaphysics Within Physics’. The problem which all these theorists note is that there are not any ultimate particles or simple ‘atoms’ devoid of lower-level compositional organisation on which to ground unambiguous higher-level distinctions of causal power (ibid p.167).
On page 31 of their ‘Every Thing Must Go’ Ladyman et al. also attack Jaegwon Kim’s 1998 book ‘Mind in a Physical World’ for claiming to be a defence of physicalism despite the fact that it doesn’t engage with any contemporary physics. They note that there are no physics papers or physics books even cited by Kim in his book. This is besides the fact that Kim’s arguments rely on non-trivial assumptions about how the physical world works (ETMG p.31). According to Ladyman et al., Kim defines a ‘micro-based property’ as one that is ‘decomposable into non-overlapping proper parts’ (Kim 1998 p. 84, quote taken from ETMG). This assumption does a lot of Kim’s work in defending his physicalism; however, as we know from quantum entanglement, micro-components of reality are not decomposable in this way. They explicate their criticism of Kim as follows:
“Kim’s micro-based properties, completely decomposable into non-overlapping proper parts, and Lewis’s ‘intrinsic properties of points-or of any point sized occupants of points’, both fall foul of the non-separability of quantum states, something that has been a well-established part of micro-physics for generations” (ETMG p. 32)
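The non-separability being appealed to here is the standard quantum-mechanical point (a textbook fact, not anything peculiar to ETMG) that an entangled two-particle state cannot be written as a product of states of its non-overlapping ‘parts’:

```latex
% Standard textbook illustration of non-separability: the singlet/Bell state
% of two particles A and B cannot be factored into a product of one-particle states.
\[
  |\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|0\rangle_{A}|1\rangle_{B} - |1\rangle_{A}|0\rangle_{B}\bigr)
  \;\neq\; |\phi\rangle_{A} \otimes |\chi\rangle_{B}
  \quad \text{for any states } |\phi\rangle_{A},\, |\chi\rangle_{B}.
\]
```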
Ladyman et al are attacking Kim’s metaphysics because it is done without regard to our best fundamental physics. As we can see, their criticism of Kim is entirely in agreement with Deacon’s criticism. Deacon, however, is not merely criticising Kim for ignoring contemporary quantum physics when doing metaphysics. He is making the further claim that contemporary quantum physics shows that Kim’s arguments against emergentism do not work.
Deacon correctly argues that in quantum physics when we get to the lowest level we have quantum fields which are not divisible into point particles. He notes that quantum fields have ambiguous spatio-temporal origins, have extended properties that are only statistically and dynamically definable, etc. So given the fact that the quantum world behaves in this way, Kim’s analysis into mereological (part/whole) features fails.
Kim in effect argued that since at bottom it is all just quarks and gluons, and everything else is just a descriptive simplification of what goes on at that level, we are forced to ask: why is there anything other than physics? Deacon notes that the emergentism which Kim is criticising is connected to the idea of supervenience, which holds that “There cannot be two events exactly alike in all physical respects but differing in some mental respects, or that an object cannot alter in some mental respects without altering in some physical respect” (Davidson 1970). People who argue for emergence have to account for how something can emerge which is entirely dependent on the physical but is not reducible to it. Kim thinks that his argument shows that this type of emergence is not possible.
Kim in effect argues that since there can be no differences in the whole without differences in the parts, emergence theories cannot be true. Kim claims that in our theory we want to avoid double-counting. So if we map all causal powers to distinctive non-overlapping parts of things, this leaves no room to find causal powers uniquely emergent in aggregates of these parts, no matter how they are organised (Incomplete Nature p.167).
Now Deacon replied to this by criticising Kim for not keeping up with developments in quantum mechanics. Philosopher Paul So made the following reply to Deacon: “What I do remember is that Kim’s problem with Emergentism is that emergent phenomena like mental states may not have causal powers. Specifically, If mental states supervene (or ontologically depend) on physical ones, then its really physical ones that do all the causal work, whereas mental states just appear to have causal powers. I agree that Kim may need to learn more quantum mechanics to substantiate his claim, but I don’t think Deacon’s objection really refutes his primary concern; I don’t think Kim’s assumption about particles being fundamental constituents of the physical world is essential to his concern. What is essential to his concern is how an emergent phenomena like mental states can do any causal work if its really the phenomena from fundamental physics that do all the work. We could just replace particles with quantum fields and his concern could still stand. If emergent phenomena such as our mental states ontologically depend on quantum fields, then it appears that quantum fields do all the causal work (though in a statistical manner).” (Paul So personal communication)
Now I should first note that Deacon’s concern is with the fact that Kim’s argument is dependent on a part-whole analysis, and this part/whole analysis is simply impossible at the quantum level. It is only at the macro-level that we can do the type of mereological analysis that Kim suggests. Deacon discusses the philosopher Mark Bickhard who argues against Kim:
“A straight forward framing of this challenge to a mereological conception of emergence is provided by cognitive scientist and philosopher Mark Bickhard. His response to this critique of emergence is that the substance metaphysics assumption requires that at the base, particles participate in organisation but they do not themselves have organisation. But, he argues, point particles without organisation do not exist because real particles are the somewhat indeterminate loci of inherently oscillatory quantum fields. These are irreducibly process like and thus by definition organised. But if process organization is the irreducible source of the causal properties at this level, then it cannot be delegitimated as a potential locus of causal power without eliminating causality from the world. It follows that if the organisation of a process is the fundamental source of its causal power, then fundamental reorganisations of process, at whatever level this occurs, should be associated with a reorganisation of causal power as well” (ibid p.168)
The above quote shows why Paul’s objection doesn’t refute Deacon’s argument. However, it is worth noting that the fact that Kim’s argument fails does not by itself vindicate emergentism; Deacon has merely shown that one particular argument against it does not work. He will need a much more extensive discussion of the philosophers who reject emergence if his positive thesis is to be sustained.
DEACON ON AUTOGENS
“The reciprocal complementarity of these two self-organizing processes creates the potential for self-repair, self-reconstitution, and even self-replication in a minimal form” (Incomplete Nature p. 306)
So what we are looking at here is a co-facilitation of morphodynamic processes. By an autogen Deacon means the whole class of minimally related teleodynamic systems. Something is an autogen if it is a simple dynamical system that achieves self-generation by harnessing the co-dependent reciprocity of component morphodynamic processes. He notes that though each of these processes is self-undermining considered in isolation, when they are co-dependent they are reciprocally self-limiting, so that their self-undermining features are reciprocally counteracted. As a result an autogenic system will establish its capacity to re-form before exhausting its substrates, so long as closure is completed before reaching this point (ibid p.308). The two processes provide boundary conditions for each other, each providing a supporting environment for the other.
Deacon also argues that an autogen can reproduce. He argues that fractured components of a disrupted autogen will be able to create new autogens by the same process by which the original autogen was created. In trying to think through how the first life arose from non-life Deacon begins with the notion of autocatalysis. Typically a closed molecular system will tend toward some steady state, with fewer and fewer chemical reactions occurring over time, as the overall distribution of reactions runs in directions that offset each other; this is the second law of thermodynamics at work (ibid p. 293). Deacon notes that with chemical systems maintained far from equilibrium, where the conditions for asymmetric reaction probabilities are not reduced, non-equilibrium dynamics can produce some striking features (ibid p.293). Deacon claims that the most relevant class of non-equilibrium chemical processes is autocatalysis.
A catalyst is a molecule that, because of its allosteric geometry and energetic characteristics, increases the probability of some other chemical reaction taking place without itself being altered in the process (ibid p. 293); hence it introduces a thermodynamic element into a chemical reaction as a consequence of its shape with respect to other molecules.
Autocatalysis is a special case of catalytic reaction in which a small set of catalysts each augment the production of another member of the set, so that ultimately all members of the set are produced. This has the effect of creating a runaway increase of the molecules of the autocatalytic set at the expense of other molecular forms, until all substrates are exhausted. Autocatalysis is thus, briefly, a self-amplifying chemical process that proceeds at ever-higher rates, producing more of the same catalysts with every iteration of the reaction. Deacon notes that according to some people autocatalytic sets are extremely rare (though Stuart Kauffman has argued that they are not as rare as believed). In fact Kauffman has argued that autocatalysis is inevitable under conditions that are not too extreme. Manfred Eigen has studied hyper-cycles where autocatalysts lead to more autocatalysts. Deacon notes that obviously for such autocatalysis to occur we need a rich substrate to keep things going. When the raw materials are used up the interdependent set of catalysts dissipates. So, he argues, autocatalysis is SELF-PROMOTING, but not SELF-REGULATING or SELF-MAINTAINING. These catalytic networks of molecular interactions characterize the metabolic processes of living cells, so such processes are far from impossible.
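A toy numerical illustration (my own sketch, not a model from Deacon or Kauffman) shows this self-promoting but not self-maintaining profile: production accelerates as catalyst accumulates, then stops dead once the substrate is exhausted.

```python
# Toy illustration of why autocatalysis is self-promoting but not
# self-maintaining: catalyst production accelerates as catalyst accumulates,
# then halts once the substrate runs out. All quantities are arbitrary.

substrate = 1000.0   # raw material available to the reaction
catalyst = 1.0       # seed amount of the autocatalytic set
rate = 0.002         # illustrative rate constant

for step in range(25):
    produced = min(rate * catalyst * substrate, substrate)  # autocatalytic growth
    substrate -= produced
    catalyst += produced
    print(f"step {step:2d}  catalyst {catalyst:8.1f}  substrate {substrate:8.1f}")

# Once the substrate hits zero no further reactions occur: nothing in the set
# regenerates the conditions (the substrate supply) on which it depends.
```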
Deacon begins his discussion of evolution by quoting from Batten et al.’s 2009 paper ‘Visions of Evolution: Self Organisation Proposes What Natural Selection Disposes’. Deacon argues that while a lot of evolutionary theory tells the story of random mutations and natural selection being the primary story in evolution, there is growing evidence that self-organisation plays as big a role as random mutation. Deacon correctly argues that, strictly speaking, the theory of evolution is substrate neutral (hence a precursor to functionalism), so it doesn’t really matter to selection whether it is working on a change that results from a self-organizing process or a change that results from random mutation.
He argues that this substrate neutrality has clear implications for emergence. If like Dennett we think of evolution as an algorithmic process that can be implemented on various different machines, we can see its relevance for emergence. When a particular functional organisation is selected over thousands upon thousands of years, the same physical process is not necessarily used all of the time. He likens this to the way that 2+2=4 can be calculated using different devices: a human brain, a calculator, a page, fingers, etc. Likewise an adaptation is not identical to the collection of properties constituting the specific mechanism that realises it (ibid p.425). In some ways it is less than the collection of properties, as only some of the properties are relevant to the success of an adaptation. While in some ways an adaptation is more than its properties, as it is the consequence of an extended history of constraints being passed from generation to generation (constraints that are the product of many different substrates) (ibid. p.425).
Later Deacon describes Evolution as follows:
“Evolution in this sense can be thought of as a process of capturing, taming and integrating diverse morphodynamic processes for the sake of their collective preservation.” (ibid p. 427)
He gives some clear examples of morphodynamic processes:
(1) Whirlpools, (2) Convection Cells, (3) Snow Crystal growth (4) Fibonacci structures in inorganic matter.
And he argues that the above morphodynamic processes are interesting because:
“…What makes all these processes notable, and motivates the prefix morpho- (form), is that they are processes that generate regularity not in response to the extrinsic imposition of regularity, or by being shaped by any template structure, but rather by virtue of regularities that are amplified internally via interaction dynamics alone under the influence of persistent external perturbations.” (ibid p.242)
When discussing autogens Deacon does seem to be invoking a kind of intrinsic intentionality in cashing out what he believes are the earliest forms of information (in his sense). Deacon argues that early autogens had information (in the sense of aboutness) because a particular form of chemical bond was sensitive to which environments were good for it. This is a strange argument and is worth looking at closely.
“They argue that an autogeneic system in which its containment is made more fragile by the bonding of relevant substrate molecules to its surface could be considered to respond selectively to its environment” (ibid p.443)
Here the use of the word information (as in aboutness) is extremely strange. In what sense could the bonding of substrate molecules to the surface of an autogen convey information? Deacon notes that this bonding (or lack thereof) will have consequences for whether particular autogens survive in particular environments.
So, for example, if the autogen’s containment is disrupted in a context where the substrates that support autocatalysis are absent or of low concentration, re-enclosure will be unlikely. So stability of containment is advantageous for persistence of a given variant in contexts where the presence of relevant substrates is of low probability. He correctly notes that this is an adaptation (so different autogens are selected by different environments). And while these autogens cannot initiate their own reproduction, their differential susceptibility to disruption with respect to relevant contexts is a step in that direction (ibid p.443).
“It seems to me that at this stage we have introduced an unambiguous form of information and its interpretative basis. Binding of relevant substrate molecules is information about the suitability of the environment for successful replication; and since successful replications increases the probability that a given autogenic form will persist, compared to other variants with less success at replication, we are justified in describing this as information about the environment for the maintenance of this interpretative capacity.” (ibid p. 443)
I suppose one could say that it is indeed information in Bateson’s sense (a difference that makes a difference), but there is nothing at that point that HAS the information; we see the information as theorists, but the information is not grasped by anything. We could view the information as a real pattern that exists whether or not there is anyone around to observe it.
Deacon then asks the obvious question: what is the difference between the information which the autogen has, and the information of, say, a thermostat? He says that in the absence of a human designer a thermostat only has Shannon information. His argument is that while things like thermostats provide information about particular aspects of their environments, this is not as significant as it may seem. A wet towel can provide information about the temperature of a room, or the tracking of dirt in from the street has the capacity to indicate that somebody has entered the room. He notes that while things like wet towels can provide information as to the temperature of the room, there are an infinite number of other things that the wet towel is informative about, depending on the interpretative process used. He argues that that is the KEY DIFFERENCE. What information the physical process provides depends on the idiosyncratic needs and interests of the interpreter. So, he claims, there is no INTRINSIC END DIRECTEDNESS to these mechanisms or physical processes.
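For contrast, the Shannon information Deacon has in mind here is just the engineering measure of reduced uncertainty over a probability distribution. A minimal sketch (with made-up probabilities) shows that nothing about ends, interests, or aboutness enters into the calculation:

```python
import math

# Shannon information in the bare engineering sense: a quantity of reduced
# uncertainty defined over a probability distribution, with no reference to
# what the signal is about or to any interpreter's interests.
# (Illustrative only; the probabilities below are invented.)

def entropy(probs):
    """Shannon entropy H = -sum(p * log2 p), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A thermostat-like binary signal: room above / below the set point.
print(entropy([0.5, 0.5]))   # 1.0 bit per reading
print(entropy([0.9, 0.1]))   # ~0.47 bits: a more predictable signal carries less
```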
He makes the startling claim that this INTRINSIC END DIRECTEDNESS is provided by the autogen’s tendency to reconstitute itself after being disrupted. This seems like the dreaded teleology being introduced for even simple non-living autogens. His argument involves noting that the autogen not only tends towards a specific target state, it also tends to develop towards a state that reconstitutes and replicates this tendency. This tendency has normative consequences: autogens can receive misinformation (bind with other molecules that are not catalytic substrates and will weaken them). Such autogen lineages will tend to be wiped out. So where there is information (non-Shannonian) there is the possibility of error (because aboutness is an extrinsic relationship and so is necessarily fallible).
I am not sure what to make of his claims here, to be honest. For me Dennett put paid to the idea of intrinsic aboutness as far back as his (1987) ‘Evolution, Error, and Intentionality’, where he attacks Fodor’s notion of intrinsic intentionality. It seems to me that his arguments go through against Deacon without modification. If Deacon were arguing that the normativity of these autogens is a ‘real pattern’ I would have no problem, but I am having trouble understanding his idea of INTRINSIC END DIRECTEDNESS.
Deacon not only speaks of autogens having relations of aboutness to the world, he goes further and argues that autogens have a type of consciousness. He even argues that individual neurons in our brain are conscious:
“The central claim of this analysis is that sentience is a typical emergent attribute of any teleodynamic system… A neuron is a single cell, and simpler in many ways than almost any other single-cell eukaryotic organisms, such as an amoeba. But despite its dependence on being situated within a body and within a brain, and having its metabolism constantly tweaked by signals impinging on it from hundreds of other neurons, in terms of the broad definition of sentience I have described above, neurons are sentient agents” (Ibid pp.509)
This claim of course means that for Deacon an autogen is conscious. Now, above we noted that for Deacon an autogen has intrinsic end-directedness because there are aspects of its environment that are good for its survival and reproduction. Deacon has here offered an interesting proposal as to how consciousness and aboutness emerge that manages to avoid panpsychism, the exceptionalism which claims that only humans are conscious and that consciousness magically emerges for us and us alone, and the eliminativism which says that experience does not exist. However, I don’t think he has really offered any evidence to support his theory over the theories of his rivals. Though it is certainly an interesting proposal that, if researched further, could lead to answers.
CONSCIOUSNESS AND PAIN
As Deacon’s key exemplar of consciousness he focuses on pain and suffering. I think this is an interesting and useful starting point. In his 1978 paper ‘Why you can’t make a computer that feels pain’, Dennett discussed whether it would be possible to build a computer that feels pain. Dennett’s conclusion was that you couldn’t build a computer that feels pain because the ordinary language conception of pain is radically incoherent. So if we try to model pain (as a coherent phenomenon) we will have to radically modify the ordinary language concept, and once the concept is so modified that we can model it in a computer, it will not share all of the features that we typically associate with pain. So it is no good a theorist saying that a computer does not experience pain because intuitively pain must have properties x and y, because we know that people’s intuitions on the nature of pain form a radically incoherent set. Most people upon reading Dennett respond that an ESSENTIAL feature of pain is its intrinsically horrible feel. So, since there is no evidence that computers feel pain in this ESSENTIAL sense, we cannot build a computer that feels the essential characteristic of pain: its awfulness. Now Dennett could reply to this by arguing that pain has no essential features. He could point to pain asymbolia, where people feel pain but do not mind the pain (hence don’t find it intrinsically awful), to show that pain does not have an essential feature of awfulness. Likewise he could point to the fact that people who have congenital analgesia do not experience pain. Now what is interesting about this disorder is that people who suffer from it typically injure themselves at night because they do not adjust their body when asleep. People without the disorder typically adjust their bodies if in an uncomfortable position (even while asleep), which indicates that we can feel pain even while asleep. It is hard to know what to make of these claims, but Dennett notes that the fact that we can have unconscious pains, and pains which don’t feel awful, indicates that we should not comfortably speak of essential features of pain. Hence he thinks we should not take too seriously the intuition that the supposedly essential features of pain cannot be modelled in a computer. Deacon agrees with some of Dennett’s opponents that we cannot have a computer that feels pain. But Deacon’s reasons are not mere reliance on one’s intuitions about the nature of pain. Deacon describes emotion as follows:
“In the previous section, we identified the experience of emotion with the tension and work associated with employing metabolic means to modify morphodynamics. It was noted that particularly in cases of life-and-death contexts, morphodynamic change must be instituted rapidly and extensively, and this requires extensive work at both the homeodynamic (metabolic) and morphodynamic (mental) levels. The extent of this work and the intensity of the tension created by the resistance of dynamical processes to rapid change, is I submit experienced as the intensity of emotion” (Incomplete Nature p. 527)
So for Deacon, for a machine to experience emotion it needs work (the tension associated with fighting against the 2nd law of thermodynamics), so a machine in the sense of Dennett 1978 (which is just a description of a physical regularity) cannot experience emotion.
Of course Daniel Dennett today may have moved closer to Deacon’s views, as can be seen in his talk ‘If Brains are Computers what Kind of Computer are they?’ http://youtu.be/OlRHd-r2LOw . Dennett’s talk of selfish neurons forming coalitions with each other is very close to Deacon’s views on the nature of neuronal activation.
Throughout the book Deacon makes claims about the nature of possibility that need to be thought through very carefully, as they are hard to square with physics, biology, or anything else; is he arguing for a possible-worlds type scenario? Does his view commit him to modal realism à la Lewis? Or is it the type of possible world proposed by Kripke? In ETMG Ladyman et al discuss modal realism; one wonders whether Deacon (who is a naturalist) is defending a similar type of modal realism to theirs. Unfortunately, though one of the central ideas in the book is that what doesn’t happen, or could have happened but didn’t, affects the physical world, Deacon didn’t really explicate what he means by possibility. So I need to think through what the metaphysics of possibility implied by his thesis really amounts to. Thinking through and comparing Williamson’s views on the nature of possibility as sketched in his new book ‘Modal Logic as Metaphysics’, with Ladyman et al’s OSR and Deacon’s Incomplete Nature, may be helpful.
In this blog I have merely described some of Deacon’s main arguments in defence of his position on how mind emerged from matter, and gestured towards some strengths and weaknesses in his view. In my next blog I will discuss his conception of information and modality in detail and compare his views with those of Ladyman et al.
I think that his book is a useful starting point in helping us think through these complex issues. Though, aside from the need to factor thermodynamics into our computational theories of how the mind and life work, I think that he didn’t provide sufficient evidence to support his claims of autogens having end-directedness or sentience. That said, the book represents an interesting attempt to deal with some very old and intractable problems.

Anomalous Monism and Naturalism

ANOMALOUS MONISM AND NATURALISM
“I acquiesce in what Davidson calls anomalous monism, also known as token physicalism: there is no mental substance, but there are irreducibly mental ways of grouping physical states and events…Still the mentalistic predicates, for all their vagueness, have long interacted with one another, engendering age-old strategies for predicting and explaining human action. They complement natural science in their incommensurable way, and are indispensible both to the social sciences and to our every day dealings. Read Dennett and Davidson” (Quine: Pursuit of Truth 1990 pp72-73)
“The notion of an exhaustive class of states each of which qualifies as thinking about Fermat’s Last Theorem, and each of which is specifiable in purely physiological terms, seems discouragingly unrealistic even if restricted to a single thinker. It is at this point that we must acquiesce in the psychophysical dualism of predicates, though clinging to our effortless monism of substance. It is what Davidson has called anomalous monism. Each occurrence of a mental state is still, we insist an occurrence of a physical state of a body, but the groupings of these occurrences under mentalistic predicates are largely untranslatable into physiological terms. There is token identity, to give it the jargon, but type diversity…The general predicate ‘thinking about Fermat’s Last Theorem’, then, is irreducibly mentalistic. It still denotes various physical objects in its intermittent way (usually mathematicians), and it has its place in our meaningful physicalistic language. The point of anomalous monism is just that mentalistic predicate imposes on bodily states and events a grouping that cannot be defined in the special vocabulary of physiologically describable, we presume, given all pertinent information.” (Quine: From Stimulus to Science 1995 pp87-88)
Chapter 6 of Quine’s classic book ‘Word and Object’ (1960) contains a section called ‘The Double Standard’ where he discusses Brentano’s claim that propositional attitude ascriptions cannot be reduced to purely physical (extensional) science. Quine as a naturalist obviously chose the course of arguing that if propositional attitude ascriptions are not reducible to extensional physics we must reject them from our physics. However he recognised that descriptions of people’s behaviour in terms of intentional locutions are indispensable in ordinary life. Furthermore he recognised that despite the irreducibility of intentional locutions they have real explanatory value. Hence he grudgingly adopted what he called the double standard, where we have to adopt intentional ascription language in ordinary life while eschewing it for scientific purposes. As can be seen from the above quotes he even adopted anomalous monism near the end of his philosophical career.
In the fifty-five years since ‘Word and Object’ philosophers influenced by Quine have tried to deal with the propositional attitudes in various different ways. The Churchlands adopted an approach which argued that future science would do away with propositional attitude ascriptions altogether and all human behaviour would be explained in the language of neuroscience. Fodor defended the reality of propositional attitude ascriptions. Dennett argued that propositional attitude ascriptions pick out real patterns which carry information that would be missed if we adopted a purely physical-level explanation, but that at base intentions are not to be found at the ultimate physical level. Davidson’s solution was to adopt anomalous monism; Quine, as we can see from the above quotes, believed that Davidson’s solution was the correct one. In this blog I will argue that if one is a naturalist like Quine then recent developments show that from a naturalistic perspective (and there is no other legitimate perspective) anomalous monism is false.
NATURALISM AND ANOMALOUS MONISM
In their ‘Every Thing Must Go’ (2006) Ladyman et al criticise both Davidson and Fodor, who wrote that physics discovers causal laws that take the form of exceptionless generalizations relating atomic events (ETMG p.35). Ladyman et al take exception to this claim: according to them, if one looks at actual physical laws one will see that these laws have numerous counterexamples, and the laws physicists produce are functional interdependencies rather than generalisations (ibid. p.35). They hold Davidson’s 1970 paper ‘Mental Events’ responsible for the fact that a substantial proportion of contemporary philosophers think that biological, mental, and economic properties supervene on physical properties (some of these people think of supervenience as part/whole physicalism, which contradicts the Primacy of Physics Constraint) (ibid p.55).
Both Davidson (1970) and Fodor (1974) were arguing against type reductionism. Ladyman et al note that the title of Fodor’s paper ‘Special Sciences or The Disunity of Science as a Working Hypothesis’ indicates that he was attacking Putnam and Oppenheim’s (1958) paper ‘Unity of Science as a Working Hypothesis’. Ladyman et al note that Fodor argues against bridge laws, which played no role in Putnam and Oppenheim’s original paper. They also criticize Fodor for not dealing with the many examples which Putnam and Oppenheim use in their paper; they chastise Fodor for dealing with toy models of physics as opposed to real physics. This approach has become more and more common over the last forty years, to the detriment of metaphysics (ibid. p. 63).
Fodor’s first main argument against the unity of science is that the generalisations in physics usually contribute nothing to generalisations in the special sciences. Ladyman et al argue that Fodor massively overstates his case here. Fodor’s second argument is his multiple realization argument. They agree that the multiple realisation argument is a good argument against type reductionism, but they do not think that it supports non-reductive physicalism. Ladyman et al. argue against supervenience as follows:
“Hence, unlike external relations, the non-supervenient relations into which several quantum particles may enter are not even supervenient on the relational properties which their relata possess independently of each other. They are much more independent of the properties of the individual particles than spatio-temporal relations between classical objects. This would seem to refute supervenience in so far as the doctrine is supposed to be inspired by science as Lewis claims…We have argued that entanglement as described by quantum mechanics teaches us that Humean supervenience is false, and that all the properties of fundamental physics seem to be extrinsic to individual objects” (ibid pp 163-164)
The above claim obviously has serious implications for Davidson’s Anomalous Monism. They argue as follows:
“For now the key point is that commitment to a world of levels strictly composed out of deep down little things has played an essential role in leading neo-scholastic metaphysicians to cast doubt on the ontological seriousness of all the special sciences. This is very far reaching anti-naturalism” (ibid p. 206)
Here we can see that if philosophers take contemporary physics seriously (as they must) then the idea that the mental supervenes on the physical cannot stand, as the supervenience relation itself is in question. In order to see how this claim affects Davidson’s Anomalous Monism I will outline his arguments for it and then show where those arguments fail.
ARGUMENTS FOR ANOMALOUS MONISM
In his ‘Psychology as Philosophy’ Davidson argues that there can be no psychophysical laws as follows:
(1) He notes the holistic character of the cognitive field:
“Any effort at increasing the accuracy and power of a theory of behaviour forces us to bring more and more of the whole system of the agent’s beliefs and motives directly into account. But in inferring this system from the evidence, we necessarily impose conditions of coherence, rationality, and consistency. These conditions have no echo in physical theory, which is why we can look for no more than rough correlations between psychological and physical phenomena.” (Davidson: Psychology as Philosophy p. 25)
(2) He also notes the constraints of Rationality in explaining a person’s behaviour intentionally. He doubts that causal explanations of the type (supposedly) used in physics can capture the facts of rational decision like deciding between conflicting reasons and picking the best course of action.
(3) Davidson argues that we have no serious laws of the kind: whenever a man has such-and-such beliefs and desires, and such-and-such further conditions are satisfied, he will act in such-and-such a way (ibid p.25). He argues that there are no serious laws of this kind; rather, all we have are mere statistical generalisations. And he notes that the probabilistic laws in physics are nothing like the statistical generalisations in intentional explanation, because physical laws give sharply fixed probabilities which spring from the nature of the theory. If we want to give an explanation of a person’s behaviour in terms of beliefs and desires, what we need, he argues, is a quantitative calculus that brings all relevant beliefs and desires into the picture. He believes that such a calculus is impossible. (It is worth noting that he tries to construct such a calculus in his ‘Unified Theory of Thought and Action’ with, by his own admission, mixed results.)
(4) In the sixties Davidson worked experimentally for several years on Decision Theory as developed by Frank Ramsey (‘Truth and Probability’, 1926). He notes that this form of Decision Theory (basically Game Theory) is the best way to discover the nature of beliefs and desires in terms of actions:
“Ramsey’s theory suggests an experimental procedure for disengaging the roles of subjective probability (or degree of belief) and subjective value in choice behaviour. Clearly, if it may be assumed that an agent judges probabilities in accord with frequencies or so-called objective probabilities, it is easy to compute from his choices among gambles what his values are; and similarly one can compute his degree of belief in various propositions if one can assume that his values are, say, linear in money” (Ibid p. 26)
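The computation that Ramsey’s procedure licenses can be made concrete with a small sketch. This is my own illustration of the idea rather than Davidson’s experimental set-up: given the assumption that the agent’s utility is linear in money, her degree of belief in a proposition is read off from the point at which she is indifferent between a sure amount and a bet.

```python
# Minimal sketch of the Ramsey-style computation (illustrative only).
# Assuming utility is linear in money, the stake at which an agent is
# indifferent between a sure amount and a gamble reveals her degree of belief.

def degree_of_belief(sure_amount: float, payoff_if_true: float) -> float:
    """If the agent is indifferent between `sure_amount` for certain and a bet
    paying `payoff_if_true` if proposition p is true (and 0 otherwise), then
    with linear utility: sure_amount = belief * payoff_if_true."""
    return sure_amount / payoff_if_true

# She is indifferent between 7 euro for sure and a bet paying 10 euro if it
# rains tomorrow: her degree of belief that it will rain is read off as 0.7.
print(degree_of_belief(7.0, 10.0))
```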
So Davidson grants that to some degree we can construct a scientific account of the propositional attitudes in terms of decision theory. However, he does not think that this decision theory is on a par scientifically with physical theories, because Ramsey’s theory has no predictive power unless it is assumed that people’s beliefs and values do not change over time. He noted that when studied experimentally, the testing procedure affects the very pattern we are trying to study: merely making choices (without reward or feedback) alters future choices, and the choices tend to become more and more consistent. This result, he claims, shows that Ramsey’s model will not be useful in making accurate predictions. All of this is connected to the fact that, for Davidson, beliefs cannot be attributed to a man one at a time; we must attribute many beliefs to him at once. And of course a man’s beliefs are connected to his desires and to his whole background theory of the world. When we are trying to interpret others we must assume that they hold a mostly true theory of the world, that their beliefs are connected to each other in ways that are largely consistent, and that their theories are modified to accommodate this consistency criterion. So, for example, if a person becomes aware that he holds a contradictory set of beliefs he will modify his beliefs to avoid the contradiction.
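To make the kind of computation Davidson mentions a little more concrete, here is a minimal sketch of the Ramsey-style extraction of a degree of belief from a single choice, on the idealising assumption that the agent’s values are linear in money. This is my own illustration rather than Davidson’s or Ramsey’s formal apparatus, and the function name and figures are purely hypothetical.

```python
# A minimal sketch (illustrative only) of reading a degree of belief off a choice,
# assuming utility is linear in money. If an agent is indifferent between a sure
# amount `sure` and a gamble paying `win` if proposition P holds and `lose`
# otherwise, then sure = p*win + (1 - p)*lose, so p = (sure - lose) / (win - lose).

def degree_of_belief(sure: float, win: float, lose: float) -> float:
    """Subjective probability of P implied by indifference, given linear utility."""
    if win == lose:
        raise ValueError("The gamble must pay different amounts in the two outcomes.")
    return (sure - lose) / (win - lose)

# Example: indifference between 4 euro for sure and a bet paying 10 euro if P,
# 0 euro if not-P, implies a degree of belief of 0.4 in P.
print(degree_of_belief(4, 10, 0))  # 0.4
```

The sketch only shows that, under strong idealisations, beliefs and desires can in principle be disentangled from choice behaviour; Davidson’s worry is precisely that these idealisations fail once the act of choosing itself alters the agent’s future choices.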
In his 1991 book ‘Donald Davidson’ Simon Evnine noted a problem with construing this rationality constraint in terms of physics (in this case neuroscience):
“Let us look in detail at a case where the absence of an echo in physical theory of normative principles means that we cannot have a law linking some mental predicate with some physical predicate. If p is the proposition that there are at least ten apples in my bag, and q is the proposition that there are at least 5 apples in my bag, then, since p entails q, normative principles tell us that if someone believes that p, he should not believe that not q. Now suppose that there were psychophysical laws which connected the belief p with neural state m, and the belief that not-q with neural state n. These bridge laws ought to enable us to infer, from the fact that if someone believes that p, then he should not believe that not q, that if someone is in neural state m, he should not be in neural state n. But how are we to make sense of this ‘should’ in the context of a physical law relating two distinct neural states?” (Evnine p. 19)
Evnine is surely correct that our brain states ALONE do not tell us that we should or should not avoid contradictions. However, one can be pretty sure that evolution built our brains in such a way as to respect things like the law of non-contradiction, since respecting it would be vital to our survival. As Quine correctly noted, creatures inveterately wrong in their inductions have the pathetic but praiseworthy tendency of dying before reproducing their own kind. Things are even worse for creatures who are incapable of reasoning deductively.
Davidson builds his argument for Anomalous Monism on three premises:
(1) Mental Events are causally related to Physical Events.
(2) Singular Causal relations are backed by Strict Laws.
(3) There are no strict psychophysical laws.
The above set of premises seems on the face of it to be inconsistent: if mental events are causally related to physical events, and singular causal relations are indeed backed by strict laws, then it would seem that, contra premise 3, we can have strict psychophysical laws. Davidson argues that, contrary to appearances, the premises are consistent, and he tries to show why this is so in his 1970 paper ‘Mental Events’. As an example of a mental event being causally related to a physical event he notes:
“If someone sunk the Bismarck, then various mental events such as perceivings, notings, calculations, judgements, decisions, intentional actions, and changes of belief played a causal role in the sinking of the Bismarck. I would urge that the fact that someone sank the Bismarck entails that he moved his body in ways that were caused by mental events of certain sorts, and his bodily movement in turn caused the Bismarck to sink” (Davidson: Mental Events p. 208)
So we can see from the above that when Davidson talks about mental events he is referring to intentional actions as opposed to conscious states. His first premise thus amounts to the claim that intentional actions causally interact with non-intentional events. He holds that the causal interaction can also go the other way: a ship driving towards us, for example, can cause us to perceive a ship. In this sense Davidson argues that physical events can cause mental events and vice versa. Given all of this, one could be forgiven for wondering why he denies that there can be psychophysical laws.
Firstly, he argues that mental events are identical to physical events. By an event he means an unrepeatable, dated individual, e.g. the event of the twin towers being attacked (see Stephen Pinker’s ‘The Stuff of Thought’ for difficulties in individuating events). His Anomalous Monism is the position that the mental supervenes on the physical: mental events depend on physical events. This entails that two events cannot be alike in all physical respects but differ in mental respects, and that an object cannot alter in some mental respect without altering in some physical respect (ibid p. 214). However, Davidson argues that despite the preceding facts this does not mean that we can reduce the psychological to the physical, any more than we can reduce moral properties to descriptive properties. The thought is that morality is prescriptive (normative) and we cannot derive an ought from an is, while intentional descriptions are also irreducibly normative. When we judge that a creature is an intentional system we are attributing a minimal level of rationality to it: we are saying that it can judge that something is the case.
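Davidson’s supervenience claim is often given a schematic formal gloss along the following lines. This is a standard reconstruction rather than Davidson’s own formula; PHYS and MENT are simply labels for the physical and mental predicates respectively.

```latex
% Supervenience of the mental on the physical, schematically:
% no difference in mental respects without a difference in physical respects.
\forall x\,\forall y\;\Big[\big(\forall P \in \mathrm{PHYS}:\; P(x) \leftrightarrow P(y)\big)
  \;\rightarrow\; \big(\forall M \in \mathrm{MENT}:\; M(x) \leftrightarrow M(y)\big)\Big]
```

Read contrapositively, this says that any mental difference between two events requires some physical difference between them, which is just the claim quoted above.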
There are real weak points in all of Davidson’s premises; however, I think that premise 3 is the weakest. He argued that there are no psychophysical laws. There is a degree of truth to the claim, but only because when Davidson talks about the psychological he is talking about our standard folk psychology. Now our folk psychology, like our folk physics and our folk biology, is the result of our idiosyncratic evolutionary history and our shared cultural heritage. Folk psychology, folk physics, etc. are pragmatically useful, but they are also riddled with inconsistencies and totally ill-equipped to be translated into the language of scientific laws. But as Chomsky correctly argues, this doesn’t make the mental anomalous any more than it makes the physical anomalous; rather, it just shows the folly of trying to reduce idiosyncratic folk theories to scientific theories:
“The argument does not seem entirely compelling. For the same reason we should also not compare truisms about balls rolling down hills or a storm brewing in the West with the law of falling bodies, but we are not concerned with the lack of “physico-physical laws” connecting ordinary discourse about events in the world and explanatory theories of nature…In so far as scientific inquiry might undermine one’s conviction that the Sun is setting or that objects are impenetrable, it seems that it might in principle have similar effects on one’s convictions about the nature of beliefs…Folk Mechanics seems no more susceptible than folk psychology to the formulation of bridge laws.” (Chomsky ‘New Horizons in the Study of Language and Mind’ p. 89)
If Davidson wants us to take his third premise seriously he needs to present further arguments which show why we should take a lack of psycho-physical laws any more seriously than a lack of physico-physical laws.
Another objection to Davidson’s argument is that his conception of a law relies heavily on a conception of causation that is seriously at odds with anything to be found in science. Davidson gives the following examples of one event causing another:
(1) The eruption of Vesuvius in 79 A.D. caused the destruction of Pompeii.
(2) His lighting the match caused the explosion.
(3) The next California earthquake will cause the destruction of the Golden Gate Bridge.
(4) The hurricane is causing the rise in the water level.
Davidson argues that singular causal statements like the ones above indicate the existence of laws that cover the cases. He then claims that any justification for the claim that singular causal events are covered by laws will be a priori. In his explication he notes that singular causal statements are extensional: their truth value remains invariant under substitution of co-referring names. So, for example, if (5) ‘Batman caused the Joker to cry’ is true, then (6) ‘Bruce Wayne caused the Joker to cry’ must be true as well, because ‘Batman’ and ‘Bruce Wayne’ refer to the same person. By a law he means a true universally quantified statement. Strict laws do not admit of ceteris paribus clauses, though less strict laws from sciences like biology, psychology, etc. will have such ceteris paribus clauses.
We have seen already that Davidson argues for his anomalous monism by claiming that intentional statements, which are irreducibly normative, cannot be reduced to physical statements, which form a closed system. So he argues that while mental states supervene on physical states they cannot be reduced to them. (This seems to be connected to the fact that laws in physics are exceptionless and support counterfactuals, whereas laws in the special sciences employ ceteris paribus clauses.)
Some people have criticised Davidson by arguing that his premise that true singular causal statements are covered by laws is shown to be false by contemporary quantum physics, in particular by the fact that quantum physics is not deterministic. Davidson, however, denies this charge. He argues that the laws he says exist are perfectly compatible with probabilistic laws, because such laws are universal and exceptionless (the probabilities they predict have no exceptions) (Davidson: Laws and Cause p. 205). Davidson makes the following point:
“Since it allows probabilistic laws, the cause-law thesis does not (in one fairly standard sense of that messy concept) imply determinism. Neither, then, does it imply complete predictability, even in principle, nor retrodictability.” (ibid p.205)
So we can see that Davidson, at least, does not regard his cause-law thesis as being in conflict with quantum mechanics.
CAUSATION AND PHYSICS
As we saw above, Ladyman et al. are very critical of Davidson’s views as expressed in his paper ‘Mental Events’ (1970), where Davidson argues for his Anomalous Monism. In order to understand their critique I will briefly describe the nature of their scientistic project. In their 2006 book ‘Every Thing Must Go’ they critiqued analytic metaphysics, which relies heavily on our intuitions to construct a metaphysical theory of the world. A lot of their criticisms were directed at philosophers who claim to be naturalistic but who build their metaphysics upon toy models which, while intuitively plausible, do not correspond with the world revealed by quantum mechanics. This reliance on intuitions is, they think, a bad thing, because our intuitions are not designed to handle really abstract thought about things like quantum mechanics:
“Furthermore, as science progresses we adjust our ontology in accordance with our concern for ontological parsimony. However, practical folk have no systematic reason to be interested in the constraint. Nor could natural selection attend to it when it designed the native anticipatory apparatus used by practical folk. In coping with problems of scarcity, tracking the trajectories through local space and time of bundles of rewards is almost everything. Attention to wider informational dynamics in which processes are embedded typically delivers few if any additional payoffs, and may get in the way of payoff maximization because of computational costs. Therefore modelling causal relations as sequences of collisions of objects in time is a sensible heuristic.” (Ladyman and Ross ‘Every Thing Must Go’ p. 280)
They defend Ontic Structural Realism (OSR), which tries to reconcile two well-known arguments that pull us in different directions over whether we should be realists or anti-realists about science.
(1) The Pessimistic Meta-Induction: science has gone through various revolutions in which entities that previous theories were committed to (e.g. the ether) were discarded, which supposedly shows that theoretical terms are not actually picking out anything. Arguing inductively, people claim that since many revolutions have occurred in the past, and unless we assume (implausibly) that our current physics is complete, more revolutions will take place in the future. This suggests that we are not justified in believing that our theoretical terms refer to anything. We should rather treat science as an instrumental mode of making accurate predictions: the attitude summed up in the slogan ‘shut up and calculate’ (often attributed to Feynman).
(2) The No Miracles Argument: if our theoretical terms do not pick out mind-independent entities, then the success of science is a miracle. Since there are no miracles in nature, anti-realism must be false. Therefore one should be a realist.
These two arguments pull in different directions, so OSR tries to solve the problem by saying that it is structure that is preserved across revolutions. A lot of direct reference theories in semantics, e.g. Putnam (1975) and Kripke, were designed to show how theoretical terms could refer across revolutions. One of the advantages of the direct reference theories is that they could explain reference to theoretical entities within our scientific theories in ways that are more plausible than the descriptivist approach favoured by Frege, Russell and Wittgenstein. Ross and Ladyman seem to think that their OSR can explain theory change in ways that don’t rely on intuitions in the same illicit way that Kripke et al. do. In chapter 5 of ETMG, ‘Causation in a Structural World’, Ladyman, Ross, and Spurrett argue that at the level of fundamental physics we should be eliminativists about causation. This argument, if accepted, would have serious consequences for Davidson’s theory of Anomalous Monism, in particular his first two premises, which rely heavily on the notion of causation.
Ladyman et al. argue that one of the typical explanations of causation in philosophy is given in terms of the micro-bangings of sub-atomic particles. However, since fundamental physics does not justify us in postulating micro-bangings, they claim that we are justified in rejecting explanations which use this unscientific notion of causation. By this they mean that in our most basic ontology, which is given to us by contemporary physics, causation does not exist. However, this does not mean that causation cannot play a role in the explanations of the special sciences. They argue that causation should be thought of as follows:
“It is a concept that structures the notional worlds of observers who must book-keep real patterns from the perspectives that involve temporal and other asymmetries they cannot ignore on pain of discarding information” (ETMG p. 271)
So, from the perspective of their scientism, when it comes to the special sciences the concept of causation is pragmatically useful in helping us pick out real patterns in our environment. However, at the most fundamental level of reality causation does not exist; rather, it is what they call a representational real pattern.
They note that there are two main conceptions of causation: one, the folk psychological notion, and two, the scientific conception of causation (and possibly a third: causation as explicated by philosophers). We need to be careful when talking about causation to note which version we are eliminating from our fundamental ontology (all versions) and which version we are preserving as a representational real pattern (the scientific conception). One of the primary conceptions of causation is the idea of it as a kind of cosmic glue that holds events together.
They follow Russell’s views on causation, as expressed in his 1913 paper, in the sense that they agree that causation finds no role in fundamental physics. However, Russell argued that because causation has no place in fundamental physics it should be eliminated from our ontology altogether, whereas, as we saw above, Ladyman et al. argue that causation is a real pattern that helps us construct our theories in the special sciences even though it plays no role in fundamental physics. So they, unlike Russell, do not recommend outright elimination of causation.
As we noted above, we need to be careful when analysing the notion of causation to distinguish whether it is the folk psychological concept we are talking about or the philosopher’s conception of causation. We know from studies in developmental psychology that, as a matter of empirical fact, during the normal course of development children do not pick out causal sequences in the way Hume believed. People do not need two events to be constantly conjoined in order to believe them to be causally connected. The work of Spelke shows that children from about four months have expectations about causation which include singular causation, and Ducasse has shown that students perceive causal patterns on the basis of a single event. Obviously, though, our pre-theoretic intuitions about causation don’t necessarily tell us about the nature of mind-independent reality; they only tell us how things seem to us. In fact our intuitions about causality vary depending on the culture we are born into (Cosmides 1989). This casts serious doubt on whether our intuitions about causation are in any way accurate. From all of this they conclude that our human folk psychology of causation is not Humean, and that it is too varied and idiosyncratic to serve as our scientific conception of causality in the special sciences. They follow Patrick Hayes in picking out three core features of folk knowledge of causation: (1) it construes causal relations as centred on some agent of change; (2) it postulates various transformative principles (often conceived of as forces) proceeding out from an agent to the recipient of causal influence; (3) it incorporates assumptions about time asymmetry: causal influences flow from the past into the future. (ETMG p. 282)
CAUSES IN SCIENCE
They begin by criticising Russell, who argued that the notion of cause is never used by scientists. They did a corpus analysis of the journal Science over an eight-year period and found that the term ‘cause’ was used 90 times a month, while the term ‘effect’ was used even more. They argue that causation as used by scientists is not Humean causation of the simple-regularities kind; it is actually closer to the folk conception of causation. This, they note, leaves them with a problem:
“We have now argued our way into a tight corner. Getting out requires us to seek an account that (i) leaves scientists free to talk about causal processes without risk of embarrassment in the company of metaphysicians, but (ii) follows Russell in denying intuitions (folk or Humean) about causation any role in informing the metaphysical foundations of physics, or science more generally.” (ibid p.289)
Ultimately they reconcile the above seeming contradiction by arguing as follows:
“So causation, as it figures in science, is a notional-world construct. However, as we argued in 4.5, this notional world provides a useful heuristic for locating some real patterns. It is because special sciences are concerned with relatively isolatable regions of the universe which involve cross-classification from the perspective of less-than-fundamental but nevertheless limiting domains for them, that the causal relationships on which they variously focus will appeal to different aspects of their information-carrying potential”
“We think that Earman and Roberts are correct in their claim that fundamental physics discovers something of a kind that special sciences don’t; and we call this kind of something a universal real pattern. We argued in the preceding sections of this chapter for Russell’s and Redhead’s thesis that these real patterns are not causal. They are instead structural, in the sense articulated in Chapters 2 and 3.” (ibid p.290)
So, given their argument and convincing demonstration that causation plays very little role in fundamental physics, one could argue that they have cast doubt on Davidson’s first two premises in his argument for Anomalous Monism.
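As an aside, the corpus point mentioned above, counting occurrences of ‘cause’ and ‘effect’ in the journal Science, is the sort of thing that is easy to illustrate. The sketch below is purely hypothetical and is not Ladyman et al.’s actual methodology; the directory name and file layout are invented for the example.

```python
# Illustrative only: count occurrences of 'cause(s)' and 'effect(s)' in a folder
# of plain-text article files (a hypothetical corpus, not Ladyman et al.'s data).
import re
from collections import Counter
from pathlib import Path

def term_counts(corpus_dir: str, terms=("cause", "effect")) -> Counter:
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        for term in terms:
            # word boundaries so 'cause' does not also match 'because'
            counts[term] += len(re.findall(rf"\b{term}s?\b", text))
    return counts

# Example usage (the directory name is hypothetical):
# print(term_counts("science_articles_1995_2003"))
```

Nothing hangs on the details; the point is only that claims about how often scientists actually use causal vocabulary are empirically checkable.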
ANOMALOUS MONISM AND THE PRIMACY OF PHYSICS CONSTRAINT
(1) Mental Events are causally related to Physical Events.
(2) Singular Causal relations are backed by Strict Laws.
(3) There are no strict psychophysical laws.
It is important to go through Davidson’s argument in light of Ladyman et al.’s criticism of the scientific standing of causation. When Davidson talks about causal relations between events he is typically referring to macro-level events as opposed to events at a microphysical level. Below are some examples of the causal statements he discusses in his paper ‘Mental Events’:
(1) The eruption of Vesuvius in 79 A.D. caused the destruction of Pompeii.
(2) His lighting the match caused the explosion.
(3) The next California earthquake will cause the destruction of the Golden Gate Bridge.
(4) The hurricane is causing the rise in the water level.
As can be seen, none of these examples are events at the level of fundamental physics. Likewise, when he speaks of mental events being causally related to physical events and vice versa, he does not use examples from fundamental physics. So, for example, he talks of how mental events like perceivings, believings and plannings can cause physical events like the sinking of a ship. None of these examples are anything that Ladyman et al. would have a problem with: they would cash out Davidson’s examples of causation as pragmatic constructs which help us navigate the world but which do not obtain at the fundamental level. From the point of view of OSR, then, there is no problem with premise 1, the claim that mental events like perceiving and intending are causally related to physical events like a ship sinking. Premise 2, that singular causal relations are backed by strict laws, is another matter; it amounts to little more than a guess. As we have seen above, Davidson argues that these causal laws will be extensional, and he distinguishes between descriptions of causal laws and the laws themselves. However, he has not demonstrated the existence of these causal laws, and we have good reasons to think that causation does not obtain at the level of fundamental physics. At the macro level scientists do indeed appeal to causal laws, but they justify these appeals internally to a theory. As Dennett argues, the intentional stance is justified as follows:
“a pattern exists in some data-is real-if there is a description of the data that is more efficient than the bit map, whether or not anyone can concoct it’ (1991a, 34). Thus there are (presumably) real patterns in lifeless parts of the universe that no actual observer will ever reach, and further real patterns whose data points are before our eyes right now, but which no computer we can instantiate or design will ever marshal the energy to compact.” (Quote taken from Ladyman and Ross)
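One crude way to make Dennett’s criterion concrete, and this is my own illustration rather than anything in Dennett or in Ladyman and Ross, is to let a general-purpose compressor stand in for ‘a description more efficient than the bit map’: if a data string compresses well, there is some pattern in it for a description to exploit.

```python
# Crude illustration of the 'real patterns' idea: data counts as patterned if some
# description of it is shorter than the raw bit map. Here zlib compression stands
# in for 'some description'; the 0.9 threshold is an arbitrary illustrative choice.
import os
import zlib

def looks_patterned(data: bytes, threshold: float = 0.9) -> bool:
    """True if the data compresses to less than `threshold` of its original size."""
    if not data:
        return False
    return len(zlib.compress(data)) / len(data) < threshold

print(looks_patterned(b"ababababababababab" * 20))  # True: highly regular string
print(looks_patterned(os.urandom(400)))             # Almost certainly False: random bytes
```

Obviously nothing this crude settles any metaphysical question; it only shows that ‘more efficient than the bit map’ is the kind of claim that can in principle be cashed out and tested.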
Now, in the case of intentional stance ascription as used by cognitive scientists we have good reasons to think that this requirement is met, but we have no real reason to think that it is met at the level of ordinary folk-psychological ascriptions. Davidson never really deals with scientific or computational theories of psychology; all of his descriptions come from folk psychology, and there is no good evidence that the generalisations of ordinary folk psychology are backed by strict causal laws. So Davidson gets his third premise for free, but it follows trivially from the fact that there is no science of folk psychology any more than there is a science of folk physics (obviously ethno-scientists study the types of folk beliefs that people hold). We don’t expect folk physics to be compatible with scientific physics, and the same is true of folk psychology. So Davidson’s argument for Anomalous Monism doesn’t really succeed in any meaningful way.