INCOMPLETE NATURE, EMERGENCE AND CONSCIOUSNESS
INTRODUCTION
Terrence Deacon’s 2011 book ‘Incomplete Nature: How Mind Emerged From Matter’ has generally been greeted with acclaim by the philosophical community (with some exceptions: see McGinn’s and Fodor’s reviews). The book, which attempts a naturalistic account of how mind emerged from matter, is challenging and extremely well argued. In this blog I will try to summarise and analyse some of the primary arguments in Deacon’s book.
His ‘Incomplete Nature’ begins with an extremely interesting analogy. Deacon speaks about how for many years the number zero was banned from mathematics. Mathematicians feared zero because of its strange paradoxical characteristics, so they did not admit it into their mathematical systems. Deacon begins his book with the following quote:
“In the history of culture, the discovery of zero will always stand out as one of the greatest single achievements of the human race.” (Tobias Dantzig)
Deacon argues that a similar situation now obtains for what he calls ententional phenomena. The word ‘ententional’ was coined by Deacon as a generic adjective to describe all phenomena that are intrinsically incomplete in the sense of being in relationship to, constituted by, or organized to achieve something non-intrinsic. This includes function, information, meaning, reference, representation, agency, purpose, sentience, and value (Incomplete Nature p.549). Deacon argues that ententional phenomena, like the concept of zero, have a primary quality of absence. These absential features mean the relevant phenomena are determined with respect to an absence. So, for example, our intentional states are directed towards some state of affairs which may or may not obtain, and so have an absential quality. These absential qualities are something that science has up until now found it impossible to give a causal account of. This is a serious state of affairs, because something like my ‘belief that P’ can have obvious causal consequences, e.g. my moving my body to position x. Deacon correctly notes that scientists treat ententional phenomena much as mathematicians once treated zero: scientists and philosophers treat them as something spooky and paradoxical which needs to be contained or outright eliminated from our ontology. Deacon proposes that admitting ententional phenomena into our ontology will greatly increase our theoretical powers in science, just as admitting zero into mathematics greatly increased our calculating capacities. So the aim of his book is to admit the reality of ententional phenomena and show how they arrived on the scene naturalistically. He argues that his theory can explain how a form of causality dependent on absential qualities can exist, and he claims that this explanation is compatible with our best science.
It is interesting to briefly compare Deacon’s view with the views of Ladyman et al. as expressed in their ‘Every Thing Must Go’, which I discussed in my last blog. While Ladyman et al. treat ententional phenomena as real patterns, albeit patterns which do not exist in our fundamental ontology, Deacon treats ententional phenomena in a more robustly realist way. So, for example, on p.38 he talks of the universe as being a causally closed system (all the basic causal laws of the universe form a closed system; all change comes from within). This seems radically at odds with Ladyman and Ross, who argue that causality plays no role at the level of basic physics, though it does play a role in the special sciences. However, a closer examination shows that Deacon’s views on causation are actually pretty close to those of Ladyman et al.
Deacon outlines the thesis of his book as follows:
“As rhetorically ironic as this sounds, the thesis of this book is that the answer to the age-old riddle of teleology is not provided in terms of “nothing but…” or “something more…” but rather “something less…” this is the essence of what I am calling absentialism” (Incomplete Nature p.43)
When speaking of his explanations in terms of absence he makes claims about processes and substances which are strikingly analogous to claims made by Ladyman et al.:
“Showing how these apparently contradictory views can be reconciled requires that we rethink some very basic tacit assumptions about the nature of physical processes and relationships. It requires reframing the way we think about the physical world in thoroughly dynamical, that is to say, process, terms, and recasting our notions of causality in terms of something like the geometry of this dynamics, instead of thinking in terms of material objects in motion affected by contact and fields of force” (ibid p. 44)
He argues that while it is intuitive to think of the world in terms of billiard-ball causality, when it comes to basic physics this notion has been abandoned in favour of fields of probability rather than discretely localizable stuff. It is the failure to overcome this billiard-ball conception of stuff which leads people to think that something must be added to matter to make a mind.
He argues that our ultimate scientific challenge is to precisely characterize the geometry of dynamical processes, from thermodynamic processes to living and mental processes, and to explain their dependency relationships with respect to each other. It is worth at this point considering Deacon’s criticisms of the metaphysical views of Jaegwon Kim.
DEACON ON EMERGENTISM AND KIM
Deacon argues that the reductionism which began with Democritus, and which has produced many successful research programmes (for example, the reduction of chemistry to physics), is favoured by most contemporary theorists. This view has led a lot of philosophers to think that smaller is more fundamental (the philosopher Thomas Wilson dubbed this “smallism”). Deacon correctly notes that this view is not necessarily true:
“It is not obvious, however, that things do get simpler with a descent in scale, or that there is some ultimate smallest unit of matter, rather than merely a level of scale below which it is not possible to discern differences.” (Incomplete Nature p.153)
Nonetheless he points out that a lot of science has been successful as a result of thinking of objects in terms of their component parts. He claims that, despite the success that thinking of objects as made of component parts has had in science, it has the drawback of focusing attention away from the contributions of interaction complexity. This, he complains, suggests that investigating the organisational features of things is less important than investigating their component properties.
When discussing emergence Deacon gives Sperry’s example of how consciousness is a product of certain configurations of matter which are only found in the brain. Though this is not a new type of matter, the configuration can have causal consequences that other configurations of matter do not have. Sperry uses the analogy of a wheel, which is a particular configuration of matter. This configuration does not involve new stuff, but it does have causal consequences that are unexpected; i.e. the wheel can move in ways that other configurations of matter cannot. So Deacon thinks we can have emergentism without lapsing into any kind of mysterious dualism.
He notes that Jaegwon Kim is generally considered to have refuted certain forms of emergentist theories. He sums up Kim’s argument as follows:
“Assuming that we live in a world without magic, and that all composite entities like organisms are made of simpler components without residue, down to some ultimate elementary particles, and assuming that physical interactions ultimately require that these constituents and their causal powers (i.e. physical properties) are the necessary substrate for any physical interaction, then whatever causal properties we ascribe to higher-order composite entities must ultimately be realized by these most basic physical interactions. If this is true, then to claim that the cause of some state or event arises at an emergent higher level is redundant. If all higher-order causal interactions are between objects constituted by relationships among these ultimate building blocks of matter, then assigning causal power to various higher-order relations is to do bookkeeping. It’s all just quarks and gluons, or pick your favourite smallest unit, and everything else is just a gloss or descriptive simplification of what goes on at that level. As Jerry Fodor describes it, Kim’s challenge to emergentists is: why is there anything except physics?” (Terrence Deacon, ‘Incomplete Nature’, p.165)
Deacon claims that Kim’s challenge can be attacked from the point of view of contemporary physics. He argues that the substance metaphysics which Kim uses to support his mereological analysis is not supported by quantum physics. Deacon’s point is similar to one made by Ladyman et al. in their ‘Every Thing Must Go’, while Tim Maudlin has made similar points in his ‘The Metaphysics Within Physics’. The problem which all of these theorists note is that there are no ultimate particles or simple ‘atoms’ devoid of lower-level compositional organisation on which to ground unambiguous higher-level distinctions of causal power (ibid p.167).
On page 31 of their ‘Every Thing Must Go’, Ladyman et al. also attack Jaegwon Kim’s 1998 book ‘Mind in a Physical World’ for claiming to be a defence of physicalism despite the fact that it doesn’t engage with any contemporary physics. They note that not a single physics paper or physics book is even cited by Kim in his book. This is despite the fact that Kim’s arguments rely on non-trivial assumptions about how the physical world works (ETMG p.31). According to Ladyman et al., Kim defines a ‘micro-based property’ in terms of the property of being ‘decomposable into non-overlapping proper parts’ (Kim 1998, p.84; quote taken from ETMG). This assumption does a lot of the work in Kim’s defence of his physicalism; however, as we know from quantum entanglement, the micro-components of reality are not decomposable in this way. They explicate their criticism of Kim as follows:
“Kim’s micro-based properties, completely decomposable into non-overlapping proper parts, and Lewis’s ‘intrinsic properties of points-or of any point sized occupants of points’, both fall foul of the non-separability of quantum states, something that has been a well-established part of micro-physics for generations” (ETMG p. 32)
Ladyman et al. attack Kim’s metaphysics because it is done without regard to our best fundamental physics. As we can see, their criticism of Kim is entirely in agreement with Deacon’s. Deacon, however, is not merely criticising Kim for ignoring contemporary quantum physics when doing metaphysics. He is making the further claim that contemporary quantum physics shows that Kim’s arguments against emergentism do not work.
Deacon correctly argues that in quantum physics, when we get to the lowest level, we have quantum fields which are not divisible into point particles. He notes that quantum fields have ambiguous spatio-temporal origins, have extended properties that are only statistically and dynamically definable, and so on. So given that the quantum world behaves in this way, Kim’s analysis of reality into mereological (part/whole) features fails.
Kim in effect argued that since at bottom it is all just quarks and gluons, and everything else is just a descriptive simplification of what goes on at that level, we are forced to ask: why is there anything other than physics? Deacon notes that the emergentism which Kim is criticising is connected to the idea of supervenience, which holds that “there cannot be two events exactly alike in all physical respects but differing in some mental respects, or that an object cannot alter in some mental respect without altering in some physical respect” (Davidson 1970). People who argue for emergence have to account for how something can emerge which is entirely dependent on the physical but is not reducible to it. Kim thinks that his argument shows that this type of emergence is not possible.
Kim argues that since there can be no differences in the whole without differences in the parts, emergence theories cannot be true. Kim claims that in our theorising we want to avoid double-counting. So if we map all causal powers to distinctive non-overlapping parts of things, this leaves no room to find causal powers that are uniquely emergent in aggregates of these parts, no matter how they are organised (Incomplete Nature p.167).
Now Deacon replied to this by criticising Kim for not keeping up with developments in quantum mechanics. Philosopher Paul So made the following reply to Deacon: “What I do remember is that Kim’s problem with Emergentism is that emergent phenomena like mental states may not have causal powers. Specifically, If mental states supervene (or ontologically depend) on physical ones, then its really physical ones that do all the causal work, whereas mental states just appear to have causal powers. I agree that Kim may need to learn more quantum mechanics to substantiate his claim, but I don’t think Deacon’s objection really refutes his primary concern; I don’t think Kim’s assumption about particles being fundamental constituents of the physical world is essential to his concern. What is essential to his concern is how an emergent phenomena like mental states can do any causal work if its really the phenomena from fundamental physics that do all the work. We could just replace particles with quantum fields and his concern could still stand. If emergent phenomena such as our mental states ontologically depend on quantum fields, then it appears that quantum fields do all the causal work (though in a statistical manner).” (Paul So personal communication)
Now I should first note that Deacon’s concern is with the fact that Kim’s argument depends on a part-whole analysis, and this part/whole analysis is simply impossible at the quantum level. It is only at the macro level that we can do the type of mereological analysis that Kim suggests. Deacon discusses the philosopher Mark Bickhard, who argues against Kim:
“A straight forward framing of this challenge to a mereological conception of emergence is provided by cognitive scientist and philosopher Mark Bickhard. His response to this critique of emergence is that the substance metaphysics assumption requires that at the base, particles participate in organisation but they do not themselves have organisation. But, he argues, point particles without organisation do not exist because real particles are the somewhat indeterminate loci of inherently oscillatory quantum fields. These are irreducibly process like and thus by definition organised. But if process organization is the irreducible source of the causal properties at this level, then it cannot be delegitimated as a potential locus of causal power without eliminating causality from the world. It follows that if the organisation of a process is the fundamental source of its causal power, then fundamental reorganisations of process, at whatever level this occurs, should be associated with a reorganisation of causal power as well” (ibid p.168)
The above quote shows why Paul’s objection doesn’t refute Deacon’s argument. However, it is worth noting that blocking Kim’s argument does not by itself vindicate emergentism; Deacon has merely shown that one particular argument against emergence does not work. He will need a much more extensive discussion of the philosophers who reject emergence if his positive thesis is to be sustained.
DEACON ON AUTOGENS
Deacon’s central construct is the autogen: a molecular system in which two self-organizing (morphodynamic) processes, in his model reciprocal autocatalysis and the self-assembly of a containing shell, generate each other’s supporting conditions. As he puts it:

“The reciprocal complementarity of these two self-organizing processes creates the potential for self-repair, self-reconstitution, and even self-replication in a minimal form” (Incomplete Nature p. 306)
So what we are looking at here is a co-facilitation of morphodynamic processes. By an autogen Deacon means the whole class of minimal teleodynamic systems of this kind. Something is an autogen if it is a simple dynamical system that achieves self-generation by harnessing the co-dependent reciprocity of component morphodynamic processes. He notes that though each of these processes considered in isolation is self-undermining, together they are reciprocally self-limiting, so that their self-undermining features are reciprocally counteracted. As a result an autogenic system will establish its capacity to re-form before exhausting its substrates, so long as closure is completed before reaching this point (ibid p.308). Each of the two processes provides boundary conditions for the other, and each thereby provides a supporting environment for the other.
Deacon argues that an autogen can also reproduce: fractured components of a disrupted autogen will be able to seed new autogens by the same process through which the original was formed. In trying to think through how the first life arose from non-life, Deacon begins with the notion of autocatalysis. Typically a closed molecular system will tend toward some steady state, with fewer and fewer chemical reactions occurring over time, as the overall distribution of reactions runs in directions that offset each other; this is the second law of thermodynamics at work (ibid p. 293). Deacon notes that in chemical systems maintained far from equilibrium, where the conditions for asymmetric reaction probabilities are not reduced, non-equilibrium dynamics can produce some striking features (ibid p.293). Deacon claims that the most relevant class of non-equilibrium chemical processes is autocatalysis.
A catalyst is a molecule that, because of its allosteric geometry and energetic characteristics, increases the probability of some other chemical reaction taking place without itself being altered in the process (ibid p. 293); hence it introduces a thermodynamic element into a chemical reaction as a consequence of its shape with respect to other molecules.
Autocatalysis is a special case of catalytic reaction in which a small set of catalysts each augment the production of another member of the set, so that ultimately all members of the set are produced. This has the effect of creating a runaway increase in the molecules of the autocatalytic set at the expense of other molecular forms, until all substrates are exhausted. Autocatalysis is thus, in brief, a self-amplifying chemical process that proceeds at ever-higher rates, producing more of the same catalysts with every iteration of the reaction. Deacon notes that according to some people autocatalytic sets are extremely rare (though Stuart Kauffman has argued that they are not as rare as is believed; in fact, Kauffman has argued that autocatalysis is inevitable under not-too-extreme conditions). Manfred Eigen has studied hypercycles, in which autocatalytic cycles give rise to further autocatalytic cycles. Deacon notes that obviously for such autocatalysis to occur we need a rich substrate to keep things going; when the raw materials are used up, the interdependent set of catalysts dissipates. So, he argues, autocatalysis is SELF-PROMOTING, but not SELF-REGULATING or SELF-MAINTAINING. Such catalytic networks of molecular interactions characterize the metabolic processes of living cells, so they are far from impossible.
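The logic of this runaway-then-collapse dynamic is easy to see in a toy simulation. The sketch below is my own illustration (the reaction scheme, the rate constant, and all quantities are invented for the example; it is not a model from Deacon’s book):

```python
# Toy model of autocatalysis: a reaction A + S -> 2A, in which the catalyst A
# accelerates its own production at the expense of a finite substrate S.
# Self-promoting, but not self-maintaining: once S is gone, everything stops.

def autocatalysis(a=1.0, s=1000.0, rate=0.0005, steps=30):
    history = []
    for _ in range(steps):
        produced = min(rate * a * s, s)  # rate grows with A itself; capped by S
        a += produced                    # every reaction yields an extra catalyst...
        s -= produced                    # ...while consuming substrate
        history.append((a, s))
    return history

for step, (a, s) in enumerate(autocatalysis(), start=1):
    if step % 3 == 0:
        print(f"step {step:2d}: catalyst A = {a:8.1f}, substrate S = {s:8.1f}")
```

The printout shows the catalyst population exploding and then flat-lining as the substrate hits zero. Nothing in the network acts to restore the conditions of its own persistence, which is exactly Deacon’s point in denying that autocatalysis alone is self-maintaining.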
Deacon begins his discussion of evolution by quoting from Batten et al.’s 2009 paper ‘Visions of Evolution: Self-Organization Proposes What Natural Selection Disposes’. Deacon argues that while a lot of evolutionary theory tells a story in which random mutation and natural selection are the primary drivers, there is growing evidence that self-organisation plays as big a role as random mutation. Deacon correctly argues that, strictly speaking, the theory of evolution is substrate neutral (hence a precursor to functionalism), so it doesn’t really matter to selection whether it is working on a change that results from a self-organizing process or a change that results from random mutation.
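This substrate neutrality can be made vivid with a small sketch of my own (the fitness function, the deterministic ‘self-organizing’ rule, and all parameters are invented for illustration; nothing here comes from Deacon or from Batten et al.):

```python
# The selection step below never asks whether a variant was produced by random
# mutation or by a deterministic "self-organizing" rule: it ranks candidates by
# fitness alone, so the source of variation is invisible to it.
import random

def mutate(x):
    return x + random.gauss(0, 0.1)   # variation via random mutation

def self_organize(x):
    return round(x * 2) / 2           # variation via a deterministic ordering rule

def evolve(fitness, population, generations=200):
    for _ in range(generations):
        variants = [random.choice([mutate, self_organize])(x) for x in population]
        # Substrate-neutral selection: keep the fittest, blind to provenance.
        population = sorted(population + variants, key=fitness, reverse=True)[:len(population)]
    return population

target = 3.0
population = [random.uniform(0.0, 10.0) for _ in range(20)]
best = evolve(lambda x: -abs(x - target), population)
print(round(best[0], 3))  # converges towards 3.0 whatever produced each variant
```

The population converges on the target either way; selection only sees fitness differences, never the process that generated them.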
He argues that this substrate neutrality has clear implications for emergence. If, like Dennett, we think of evolution as an algorithmic process that can be implemented on various different machines, we can see its relevance for emergence. When a particular functional organisation is selected over thousands upon thousands of years, the same physical process is not necessarily used all of the time. He likens this to the way that 2+2=4 can be calculated using different devices: a human brain, a calculator, a page, fingers, etc. Likewise, an adaptation is not identical to the collection of properties making up the specific mechanism that realises it (ibid p.425). In some ways it is less than that collection of properties, as only some of the properties are relevant to the success of an adaptation; in other ways an adaptation is more than its properties, as it is the consequence of an extended history of constraints being passed from generation to generation (constraints that are the product of many different substrates) (ibid p.425).
Later, Deacon describes evolution as follows:
“Evolution in this sense can be thought of as a process of capturing, taming and integrating diverse morphodynamic processes for the sake of their collective preservation.” (ibid p. 427)
He gives some clear examples of morphodynamic processes:
(1) whirlpools, (2) convection cells, (3) snow crystal growth, (4) Fibonacci structures in inorganic matter.
And he argues that the above morphodynamic processes are interesting because:
“…What makes all these processes notable, and motivates the prefix morpho- (form), is that they are processes that generate regularity not in response to the extrinsic imposition of regularity, or by being shaped by any template structure, but rather by virtue of regularities that are amplified internally via interaction dynamics alone under the influence of persistent external perturbations.” (ibid p.242)
When discussing autogens Deacon does seem to be invoking a kind of intrinsic intentionality in cashing out what he believes are the earliest forms of information (in his sense). Deacon argues that early autogens had information (in the sense of aboutness) because a particular form of chemical bond was sensitive to which environments were good for the autogen. This is a strange argument and is worth looking at closely.
“They argue that an autogeneic system in which its containment is made more fragile by the bonding of relevant substrate molecules to its surface could be considered to respond selectively to its environment” (ibid p.443)
Here the use of the word information (as in aboutness) is extremely strange. In what sense could the bonding of substrate molecules to the surface of an autogen convey information? Deacon notes that this bonding (or lack thereof) will have consequences for whether particular autogens survive in particular environments.
So, for example, if the autogen’s containment is disrupted in a context where the substrates that support autocatalysis are absent or of low concentration, re-enclosure will be unlikely. So stability of containment is advantageous for the persistence of a given variant in contexts where the presence of relevant substrates is of low probability. He correctly notes that this is an adaptation (so different autogens are selected by different environments). And while these autogens cannot initiate their own reproduction, their differential susceptibility to disruption with respect to the relevant context is a step in that direction (ibid p.443).
“It seems to me that at this stage we have introduced an unambiguous form of information and its interpretative basis. Binding of relevant substrate molecules is information about the suitability of the environment for successful replication; and since successful replications increases the probability that a given autogenic form will persist, compared to other variants with less success at replication, we are justified in describing this as information about the environment for the maintenance of this interpretative capacity.” (ibid p. 443)
I suppose one could say that it is indeed information in Bateson’s sense (a difference that makes a difference) but there is nothing at that point that HAS the information; we see the information as theorists, but the information is not grasped by anything. We could view the information as a real-pattern that exists whether or not there is anyone around to observe it.
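To see how such selection could run with no interpreter anywhere in the loop, here is a toy sketch of my own (the fragility values, substrate probabilities, and generation counts are all invented; this is a cartoon of the differential-persistence idea, not Deacon’s model):

```python
# Autogen variants differ in how easily their containment is disrupted.
# Disruption in a substrate-rich moment means re-enclosure and replication;
# disruption in a substrate-poor moment destroys the lineage.
import random

def run_lineage(fragility, p_substrates, generations=50):
    """Return (survived, replications) for one autogen lineage."""
    replications = 0
    for _ in range(generations):
        if random.random() < fragility:          # containment disrupted
            if random.random() < p_substrates:   # substrates present:
                replications += 1                #   re-encloses and replicates
            else:
                return False, replications       # fails to re-enclose: lineage lost
    return True, replications

def summarise(fragility, p_substrates, trials=2000):
    results = [run_lineage(fragility, p_substrates) for _ in range(trials)]
    survival = sum(s for s, _ in results) / trials
    mean_repl = sum(r for _, r in results) / trials
    return survival, mean_repl

for env, p in [("substrate-rich", 0.95), ("substrate-poor", 0.2)]:
    for label, fragility in [("fragile", 0.5), ("robust", 0.05)]:
        survival, repl = summarise(fragility, p)
        print(f"{env:14s} {label:7s} survival={survival:.2f} replications={repl:.1f}")
```

In the substrate-rich environment the fragile variant racks up far more replications; in the substrate-poor environment only the robust variant has any real chance of persisting. Different variants are ‘selected’ by different environments, and the pattern is there whether or not any theorist is watching.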
Deacon then asks the obvious question: what is the difference between the information which the autogen has and the information of, say, a thermostat? He says that in the absence of a human designer a thermostat only has Shannon information. His argument is that while things like thermostats provide information about particular aspects of their environments, this is not as significant as it may seem. A wet towel can provide information about the temperature of a room, and the tracking of dirt in from the street can indicate that somebody has entered the room. He notes that while things like wet towels can provide information as to the temperature of the room, there are an infinite number of other things that the wet towel is informative about, depending on the interpretative process used. He argues that that is the KEY DIFFERENCE. What information the physical process provides depends on the idiosyncratic needs and interests of the interpreter. So, he claims, there is no INTRINSIC END DIRECTEDNESS to these mechanisms or physical processes.
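The interpreter-relativity point can be rendered as a trivial sketch (again my own; the towel, the mappings, and the numbers are invented for illustration):

```python
# One and the same physical magnitude is "about" different things under
# different interpretive mappings; nothing intrinsic to the number selects
# among them.
towel_wetness = 0.42  # a single physical state, in made-up units

interpretations = {
    "room temperature":    lambda w: f"{18 + 10 * (1 - w):.1f} degrees C",
    "time since last use": lambda w: f"{8 * (1 - w):.1f} hours",
    "ambient humidity":    lambda w: f"{40 + 50 * w:.0f}% relative humidity",
}

for topic, interpret in interpretations.items():
    print(f"read as information about {topic}: {interpret(towel_wetness)}")
```

Nothing in the magnitude 0.42 privileges one of these readings over the others; the aboutness lives entirely in the interpretive mapping applied to it, which is exactly the dependence Deacon thinks the autogen escapes.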
He makes the startling claim that this INTRINSIC END DIRECTEDNESS is provided by an autogen’s tendency to reconstitute itself after being disrupted. This looks like the dreaded teleology being introduced for even simple non-living autogens. His argument involves noting that the autogen not only tends towards a specific target state, it also tends to develop towards a state that reconstitutes and replicates this tendency as well. This tendency has normative consequences: autogens can receive misinformation (bind with molecules that are not catalytic substrates and that will weaken them), and such autogen lineages will tend to be wiped out. So where there is information (non-Shannonian) there is the possibility of error (because aboutness is an extrinsic relationship and so is necessarily fallible).
I am not sure what to make of his claims here, to be honest. For me, Dennett put paid to the idea of intrinsic aboutness as far back as his 1987 paper ‘Evolution, Error, and Intentionality’, where he attacks Fodor’s notion of intrinsic intentionality. It seems to me that Dennett’s arguments go through against Deacon without modification. If Deacon were arguing that the normativity of these autogens is a ‘real-pattern’ I would have no problem, but I am having trouble understanding his idea of INTRINSIC END DIRECTEDNESS.
Deacon not only speaks of autogens having relations of aboutness to the world; he goes further and argues that autogens have a type of sentience. He even argues that individual neurons in our brains are sentient agents:
“The central claim of this analysis is that sentience is a typical emergent attribute of any teleodynamic system… A neuron is a single cell, and simpler in many ways than almost any other single-cell eukaryotic organisms, such as an amoeba. But despite its dependence on being situated within a body and within a brain, and having its metabolism constantly tweaked by signals impinging on it from hundreds of other neurons, in terms of the broad definition of sentience I have described above, neurons are sentient agents” (ibid p.509)
This claim of course means that for Deacon an autogen is conscious. Now, above we noted that for Deacon an autogen has intrinsic end directedness because there are aspects of its environment that are good for its survival and reproduction. Deacon has here offered an interesting proposal as to how consciousness and aboutness emerge, one that manages to avoid panpsychism, the exceptionalism which claims that only humans are conscious and that consciousness magically emerges for us and us alone, and eliminativism, which says that experience does not exist. However, I don’t think he has really offered any evidence to support his theory over the theories of his rivals. Though it is certainly an interesting proposal that, if researched further, could lead to answers.
CONSCIOUSNESS AND PAIN
As his key exemplar of consciousness, Deacon focuses on pain and suffering. I think this is an interesting and useful starting point. In his 1978 paper ‘Why You Can’t Make a Computer That Feels Pain’, Dennett discussed whether it would be possible to build a computer that feels pain. Dennett’s conclusion was that you couldn’t, because the ordinary-language conception of pain is radically incoherent. If we try to model pain (as a coherent phenomenon), we will have to radically modify the ordinary-language concept; and once the concept is modified enough that we can model it in a computer, it will not share all of the features that we typically associate with pain. So it is no good a theorist saying that a computer does not experience pain because intuitively pain must have properties x and y, because we know that people’s intuitions about the nature of pain form a radically incoherent set.

Most people upon reading Dennett respond that an ESSENTIAL feature of pain is its intrinsic horribleness; so, since there is no evidence that computers feel pain in this ESSENTIAL sense, we cannot build a computer that feels the essential characteristic of pain: its awfulness. Now Dennett could reply by arguing that pain has no essential features. He could point to pain asymbolia, where people feel pain but do not mind it (and hence do not find it intrinsically awful), to show that pain does not have awfulness as an essential feature. Likewise he could point to the fact that people who have congenital analgesia do not experience pain. What is interesting about this disorder is that people who suffer from it typically injure themselves at night because they do not adjust their bodies when asleep; people without the disorder typically adjust their bodies when in an uncomfortable position (even while asleep), which suggests that we can feel pain even while asleep. It is hard to know what to make of these claims, but Dennett notes that the fact that we can have unconscious pains, and pains which do not feel awful, indicates that we should not comfortably speak of essential features of pain. Hence he thinks we should not take too seriously the intuition that the supposedly essential features of pain cannot be modelled in a computer.

Deacon agrees with some of Dennett’s opponents that we cannot have a computer that feels pain. But Deacon’s reasons are not mere reliance on one’s intuitions about the nature of pain. Deacon describes emotion as follows:
“In the previous section, we identified the experience of emotion with the tension and work associated with employing metabolic means to modify morphodynamics. It was noted that particularly in cases of life-and-death contexts, morphodynamic change must be instituted rapidly and extensively, and this requires extensive work at both the homeodynamic (metabolic) and morphodynamic (mental) levels. The extent of this work and the intensity of the tension created by the resistance of dynamical processes to rapid change, is I submit experienced as the intensity of emotion” (Incomplete Nature p. 527)
So for Deacon, for a machine to experience emotion it must do work (and undergo the tension associated with fighting against the second law of thermodynamics); hence a machine in the sense of Dennett 1978 (which is just a description of a physical regularity) cannot experience emotion.
Of course, Daniel Dennett today may have moved closer to Deacon’s views, as can be seen in his talk ‘If Brains are Computers what Kind of Computer are they?’ http://youtu.be/OlRHd-r2LOw . Dennett’s talk of selfish neurons forming coalitions with each other is very close to Deacon’s views on the nature of neuronal activation.
Throughout the book Deacon makes claims about the nature of possibility that need to be thought through very carefully, as they are hard to square with physics, biology, or anything else. Is he arguing for a possible-worlds-type scenario? Does his view commit him to modal realism à la Lewis? Or to the type of possible worlds proposed by Kripke? In ETMG Ladyman et al. discuss modal realism, and one wonders whether Deacon (who is a naturalist) is defending a similar type of modal realism to theirs. Unfortunately, though one of the central ideas of the book is that what doesn’t happen, or could have happened but didn’t, affects the physical world, Deacon doesn’t really explicate what he means by possibility. So I need to think through what the metaphysics of possibility implied by his thesis really amounts to. Thinking through and comparing Williamson’s views on the nature of possibility as sketched in his new book ‘Modal Logic as Metaphysics’ with Ladyman et al.’s OSR and Deacon’s Incomplete Nature may be helpful.
In this blog I have merely described some of Deacon’s main arguments in defence of his position on how mind emerged from matter, and gestured towards some strengths and weaknesses in his view. In my next blog I will discuss his conceptions of information and modality in detail and compare his views with those of Ladyman et al.
I think that his book is a useful starting point in helping us think through these complex issues. That said, aside from the need to factor thermodynamics into our computational theories of how mind and life work, I think that he didn’t provide sufficient evidence to support his claims that autogens have end-directedness or sentience. Still, the book represents an interesting attempt to deal with some very old and intractable problems.
Structure of Blog
(1) Introduction: discussion of the zero analogy. Brief comparison with Ladyman et al.’s ‘Every Thing Must Go’.
(2) Deacon on Kim on emergentism. Kim’s famous philosophical argument against emergentism is analysed by Deacon. Deacon shows that the argument does not work if one takes account of contemporary quantum mechanics. With Kim’s argument rebutted, Deacon sets out to describe his conception of emergent phenomena.
(3) Deacon on autogens and how sentience and aboutness emerge with them. Here I argue that Deacon raises an interesting way of thinking about these issues but is far from proving anything about autogens.
(4) Consciousness and pain: a discussion of pain, Dennett, and computers. Again, Deacon offers us an interesting way of thinking about things here, but it is a very open question whether he is correct.
(5) Deacon on possibility, and the ambiguous way he uses the notion. Here I argue that Deacon needs to engage with the philosophical literature on possibility, as I think his views on possibility are too vague to be evaluated for truth value.
COMMENTS
David said: “I suppose one could say that it is indeed information in Bateson’s sense (a difference that makes a difference) but there is nothing at that point that HAS the information; we see the information as theorists, but the information is not grasped by anything. We could view the information as a real-pattern that exists whether or not there is anyone around to observe it.”
— Hey that sounds like Jim Hamlyn talking. 🙂
David quotes Deacon here:
“But despite its dependence on being situated within a body and within a brain, and having its metabolism constantly tweaked by signals impinging on it from hundreds of other neurons, in terms of the broad definition of sentience I have described above, neurons are sentient agents” (ibid p.509)
Then David critiques the quoted passage:
“This claim of course means that for Deacon an autogen is conscious. Now, above we noted that for Deacon an autogen has intrinsic end directedness because there are aspects of its environment that are good for its survival and reproduction. Deacon has here offered an interesting proposal as to how consciousness and aboutness emerge, one that manages to avoid panpsychism, the exceptionalism which claims that only humans are conscious and that consciousness magically emerges for us and us alone, and eliminativism, which says that experience does not exist. However, I don’t think he has really offered any evidence to support his theory over the theories of his rivals. Though it is certainly an interesting proposal that, if researched further, could lead to answers.”
But I don’t think that Deacon ever intended to reveal, or believes he technically understands, how conscious sentience emerges. With regard to neurons, etc., he is talking about “the broad definition of sentience I have described above”. And that broad definition concerns what he calls “vegetative sentience”. I have no problem with such a term. Deacon concludes that if emergent dynamic systems can arise from disorganized thermodynamic entropy all the way up to teleodynamic systems that appear to act in their own interests (which he calls having “vegetative sentience”), then with the emergence of yet higher levels of teleodynamic systems, fully fledged consciousness will emerge. But Deacon limits the construction of his theory to systems with vegetative sentience. I don’t believe he ever tries to tackle conscious sentience; he only points the way.
Deacon says:
“The central claim of this analysis is that sentience is a typical emergent attribute of any teleodynamic system. But the distinct emergent higher-order form of sentience that is found in animals with brains is a form of sentience built upon sentience. So, although there is a hierarchic dependency of higher-order forms of sentience on lower-order forms of sentience, there is no possibility of reducing these higher-order forms (e.g., human consciousness) to lower-order forms (e.g., neuronal sentience, or the vegetative sentience of brainless organisms and free-living cells). This irreducibility arises for the same reason that teleodynamic processes in any form are irreducible to the thermodynamic processes that they depend on.”
Deacon, Terrence W. (2011-11-21). Incomplete Nature: How Mind Emerged from Matter (p. 508). W. W. Norton & Company. Kindle Edition.