
Intentional Stance: Dennett’s 1 vital error is Searle’s 1 critical omission

Through impartial analysis of Daniel Dennett’s ‘The Intentional Stance’ (1987), I identify several inconsistencies in his position on intentionality. I then reveal a fatal flaw in the stance but, rather than throw the baby out with the bath water, make some suggestions as to how its failings can be redressed. For ease of referencing, quotations have been colour coded to help identify their source. There are three main parts to this article. Sections 1 to 6 are a series of detailed analyses, with figurative illustrations, intended to identify what might be called a creeping augmentation of meaning. Sections 7 to 10 narrow down the analysis. The last part, entitled “Of the baby and the bath water”, reveals Dennett’s one vital error.

******

1. Dennett’s statement of intent in ‘The Intentional Stance’ (TIS):
    I will argue that any object – or as I shall say, any system – whose behavior is well predicted by this strategy is in the fullest sense of the word a believer. What it is to be a true believer is to be an intentional system, a system whose behavior is reliably and voluminously predictable via the intentional strategy. p.15 TIS

To paraphrase:
Any object whose behaviour is reliably and voluminously predicted by the intentional strategy is an “intentional system”.

Note 1.1: the phrase “any object – or as I shall say, any system” will be evaluated later.
Note 1.2: “intentional system” is the name assigned to any object defined by Dennett’s intentional strategy. The assignation is therefore a label given to such objects; it does not mean that the object actually possesses intentionality until or unless Dennett shows otherwise.
Note 1.3: “What it is to be a true believer” (Dennett’s emphasis) is to be so defined by Dennett’s ‘intentional strategy’. The phrase “to be a true believer” may incorrectly be construed as equating to the unequivocal possession of belief, but this would be at odds with the ‘intentional strategy’s’ main tenet of treating objects merely “as if” they have belief. The perverse claim remains: all there is to being a true believer is being a system whose behavior is reliably predictable via the intentional strategy, and hence all there is to really and truly believing that p (for any proposition p) is being an intentional system for which p occurs as a belief in the best (most predictive) interpretation. p.29 TIS

2. Of what does the intentional strategy consist?
    the intentional strategy consists of treating the object whose behavior you want to predict as a rational agent with beliefs and desires and other mental states exhibiting what Brentano and others call intentionality. p.15 TIS

Note 2.1: the phrase “treating the object” indicates only the ascription of ‘intentionality’. This ascription is made explicit only on page 22 (see red quote below – just above fig.1). However, Dennett is more explicit in Intentional Systems Theory (IST) through the use of the clause “as if”: The intentional stance is the strategy of interpreting the behavior of an entity (person, animal, artifact, whatever) by treating it as if it were a rational agent who governed its ‘choice’ of ‘action’ by a ‘consideration’ of its ‘beliefs’ and ‘desires.’ p.1 Intentional Systems Theory (IST), 2009
Thus ‘the object’ need not possess rationality, beliefs or desires, nor need it choose or consider action. It merely must appear to do so and, in so doing, will only seem – by appearance – to possess intentionality.
Note 2.2: It is curious that in the earliest publication, “Intentional Systems” (1971), Dennett is very explicit with regard to the issue raised in note 2.1:
Lingering doubts about whether the chess-playing computer really has beliefs and desires are misplaced; for the definition of Intentional systems I have given does not say that Intentional systems really have beliefs and desires, but that one can explain and predict their behavior by ascribing beliefs and desires to them, p.91 IS
and
All that has been claimed is that on occasion a purely physical system can be so complex, and yet so organized, that we find it convenient, explanatory, pragmatically necessary for prediction, to treat it as if it had beliefs and desires and was rational. p.91 IS
It is not obvious why Dennett chooses to be more vague about this issue in TIS.

“I think you have a hard time taking seriously my resolute refusal to be an essentialist, and hence my elision (by your lights) between the “as if” cases and the “real” cases. When I put “really” in italics, I’m being ironic, in a way. I am NOT acknowledging or conceding that there is a category of REAL beliefs distinct from the beliefs we attribute from the intentional stance. I’m saying that’s as real as belief ever gets or ever could get.”
Dennett, 22 Feb 2014 (private correspondence)

3. What does the intentional strategy entail?
    first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. p.17 TIS

Note 3.1: the phrase “as a rational agent” does not mean that one can ‘assume rationality’, but that the object is to be ‘treated as if’ it were rational.
Note 3.2: the phrase “ought to have” is therefore a corollary of the assumed rationality. For something to appear rational, according to Dennett, is for that something to also appear to possess desires, beliefs and goals. These apparent intentional attributes are a pretence adopted to satisfy the requirements of the intentional strategy, since to apply the intentional stance is to treat the object under observation as if it has intentionality, desires, beliefs and goals, and to treat it as if it were rational.
Note 3.3: the phrase “given… its purpose” suggests ‘given the purpose that it (the object) possesses’. However, one can argue alternatively that an object’s behaviour may merely indicate purpose, that an object may implement purpose by design, or that an object may have purpose. Consequently, the term “purpose” can easily be misconstrued as indicating an intrinsic intentionality when used figuratively in this manner.
With regard to purpose, one may note that Dennett stipulates, it can never be stressed enough that natural selection operates with no foresight and no purpose p.299 TIS and yet,
we may call our own intentionality real, but we must recognize that it is derived from the intentionality of natural selection. p.318 TIS
In these two passages there is the idea that natural selection has no foresight and no purpose, but also that we must recognise, if we are to say that our own intentionality is real, that natural selection has given rise to it or is responsible for it. Additionally, Dennett suggests that evolution is a process of design even though selection itself has no foresight and no purpose:
we are really well designed by evolution p.51 TIS; and
the environment, over evolutionary time, has done a brilliant job of designing us. p.61 Elbow Room (ER)
This notion of purpose and design will be evaluated more fully later.
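As an aside, the step-by-step character of the strategy can be made concrete. The sketch below is entirely my own illustrative construction (a few lines of Python), not anything Dennett provides; it renders the procedure quoted above – treat the object as rational, ascribe the beliefs and desires it ‘ought to have’, then predict the action that furthers its goals in the light of those beliefs – as a toy prediction routine for a predator stalking prey.

    # Toy, self-contained illustration (my own construction, not Dennett's):
    # the intentional strategy applied to a predator stalking its prey.

    def ascribe_beliefs(self_position, prey_position):
        # Step 2: the beliefs the object "ought to have", given its place
        # in the world - here, simply where it is and where the prey is.
        return {"self_at": self_position, "prey_at": prey_position}

    def ascribe_desires():
        # Step 3: the desires it "ought to have", given its purpose.
        return {"catch_prey"}

    def predict_action(self_position, prey_position):
        # Step 1 is implicit: we have already decided to treat the object
        # as a rational agent. Step 4: predict that it will act to further
        # its goals in the light of its beliefs.
        beliefs = ascribe_beliefs(self_position, prey_position)
        desires = ascribe_desires()
        if "catch_prey" in desires:
            return "move_right" if beliefs["prey_at"] > beliefs["self_at"] else "move_left"
        return "stay"

    print(predict_action(self_position=2, prey_position=7))  # -> "move_right"

Note that nothing in this routine requires the object to possess beliefs or desires; the ascriptions are ours, which is precisely the point of notes 3.1 and 3.2.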

4. Scrutinising hiatus

By page 22 of TIS, Dennett has laid the groundwork:
The intentional strategy is a third-person examination of objects (cf. p.7 TIS, and p.87 IS) and entails treating them as if they possess intentionality, with beliefs, desires, goals and purpose. The ascription of intentionality is therefore a pretence: Dennett is not committing to a view that intentionality either does or does not truly exist, but holds that we can treat it as existing in any object, mental or otherwise.

The next task
    The next task would seem to be distinguishing those intentional systems that really have beliefs and desires from those we may find it handy to treat as if they had beliefs and desires. But that would be a Sisyphean [i.e. endless, futile] labor (Dennett’s emphasis) p.22 TIS

Note 4.1: The above quote would suggest that Dennett is advancing the opinion that if, indeed, there is a ‘spectrum’ or ‘point’ indicating where ‘true’ or ‘intrinsic’ intentionality falls (if it exists at all), identifying its location is beyond the capabilities of mankind. (see figure 1 below)


Figure 1 – Intentionality Commencement Point

5. Dennett provides a descriptive example: a thermostat evolving into a sophisticated device (animal or person)

Dennett proposes treating the thermostat as an intentional system, ascribing “belief-like states” (p.30 TIS) as it undergoes an imaginary transition of increasing sensory and evaluative capabilities, with the cumulative effect of enriching the semantics of the system, until eventually we reach ‘systems’ for which a unique semantic interpretation is practically (but never in principle) dictated. At that point we say this device (or animal or person) has beliefs about heat and about this very room, and so forth p.31 TIS

Note 5.1: In the phrase that begins “At that point”, the clause “has beliefs” states a third-person observation and an unqualified assumption of ‘intentionality’ for the ‘enriched’, evolved device.
Note 5.2: the term “we say” expresses the opinion that either ‘we (people generally) would be likely to’, or ‘I, Daniel Dennett’, ascribe beliefs to an enriched thermostatic device.
Note 5.3: Dennett expresses the opinion that there is an intentionality equivalence between a suitably enriched device and an animal or person – purely by virtue of sensory and evaluative enrichments.

6. In figure 1 above, where has Dennett positioned the “Belief, Desire, Intentionality slider” on the “Belief & Desire Spectral Scale”?

Dennett states:

    There is no magic moment in the transition from a simple thermostat to a ‘system’ that really has an internal representation of the world around it…. [Dennett’s emphasis]…. The differences are of degree. p.32 TIS

Note 6.1: This quote omits any clear reference to intentionality. Instead, there is the implied suggestion that Dennett is of the opinion – contrary to the idea that there is a specific point of complexity at which one might say that intentionality starts (cf. note 4.1, quotation from p.22) – that there is no point at which ‘as if’ intentionality ends and ‘actual’ intentionality commences. Thus we have the alternative notion of figure 2 below:


Figure 2 – Intentionality Intensity Gradient

However,
Note 6.2: in the phrase “from a simple thermostat to a ‘system’ that really has an internal representation”, Dennett borrows the concept of “internal representation” for the first time. In this context there is a notion of a transition from simple reactivity (in the case of a thermostat ‘reacting’ to temperature changes) to an internal representation of the environment (in the case of the evolved device). Thus Dennett has introduced a notion which is illustrated in figures 3 and 4 below: a simple thermostat ‘reacts’ to changes in temperature and, through Dennett’s description of increasing complexity, develops ‘internal representation’. Does ‘reaction’ become ‘internal representation’ at a certain point, as in figure 3, or by degree, as in figure 4? To have a creeping greyscale intentionality is one thing, but can you also have a creeping greyscale movement from reactive environmental responses to internal representation? Is it all convincingly explained by a simple, linear, smooth progression? Furthermore, does this figurative notion not imply that one can extrapolate to the right on the diagram and propose the evolution of even greater ‘shades’ of internal representation than currently exist within human mental experience?


Figure 3 – Point at which Reaction becomes an Internal Representation


Figure 4 – Reaction becomes Internal Representation Gradient
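For reference, the ‘simple reactivity’ end of figures 3 and 4 can be stated in a handful of lines. The sketch below is my own, purely illustrative rendering of a bare thermostat (not Dennett’s wording); everything the device does is a comparison against a set-point, which is what makes the question of where ‘internal representation’ is supposed to creep in so pressing.

    # A minimal, purely reactive thermostat: the left-hand end of figures 3 and 4.
    # Illustrative sketch only (my own construction).

    SETPOINT = 20.0  # degrees Celsius

    def thermostat_step(current_temperature):
        # The device does not model the room; it merely reacts to one number.
        if current_temperature < SETPOINT:
            return "heater_on"
        return "heater_off"

    print(thermostat_step(18.5))  # -> "heater_on"
    print(thermostat_step(21.0))  # -> "heater_off"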

Additionally,
Note 6.3: in the phrase “transition from a simple thermostat to a ‘system’ ”, Dennett has once again borrowed the term “system” (cf. note 1.1 above). The implication is that the term ‘any object’ (as in, “any object – or as I shall say, any system”) is not sufficiently explicit for Dennett and that there is some distinction between ‘object’ and ‘system’. However, in the quote from p.32 (section 6 above), the indication is that the object under examination, namely the thermostat, has transitioned from a simple object to a ‘system’. (This is implicit in Dennett’s use of single quotation marks around the term ‘system’.) This may sound petty. Nevertheless, it is a point that relates to the issues raised with intentionality and representation and that needs pursuing: when can one say of an object that it is a system (cf. figures 5 and 6 below)? Is a thermostat a system or not? Does the intentional stance apply only to systems and not to objects? I shall return to these questions later.


Figure 5 – At what point does an Object become a System?


Figure 6 – Object/Systems Complexity Gradient

Dennett’s general appeal

The reason for making these analyses and for highlighting them with figurative illustrations is to help identify any conceptual ‘leaps of faith’ that might indicate false claims, and to identify the misappropriation of concepts by a ‘creeping augmentation’ or ‘creeping deviation’ of meaning or definition:

Running throughout Dennett’s writing is the idea that artefact complexity – in terms of sensory, evaluative and environmentally representative facilities – is a sufficient parallel to the evolved complexity of organisms: Dennett is very keen to draw an analogy between the potential in the appropriate design of artefacts and the evolution of organisms. The purpose of this analogy is to promulgate the thesis that there is nothing special about humankind and nothing mysteriously unique about human intentionality – intentionality can be made to grow artificially. Dennett’s appeal rests on a generalised acceptance of the validity of the undefined ‘concept’ of complexity in ‘systems’. This concept of the complexity of systems is further illustrated in this transcription from a podcast interview about the ‘Chinese Room Argument’ between Nigel Warburton and Daniel Dennett for ‘PhilosophyBites.com’ (reproduced with kind permission from Nigel Warburton).

09:14
Dennett
Imagine the capital letter “D”. Now turn it 90 degrees counter clockwise. Now perch that on top of a capital letter “J”. What kind of weather does that remind you of?
Nigel Warburton:
The weather today – raining.
Dennett:
That’s right; it’s an umbrella. Now notice that the way you did that is by forming a mental image. You know that, coz you are actually manipulating these mental images. Now, that would be a perfectly legitimate question to ask – in the Chinese Room scenario – and if Searle (in the back-room) actually followed the program, without his knowing it, the program would be going through those exercises of imagination: it would be manipulating mental images. He would be none the wiser coz he’s down there in the CPU opening and closing registers, so he would be completely clueless about the actual structure of the system that was doing the work.
Now, everybody in computer science, with few exceptions, they understand this because they understand how computers work, and they realise that the understanding isn’t in the CPU, it’s in the system, it’s in the software; that’s where all the competence, all the understanding lies, and Searle has told us that, that reply to his argument – the systems reply – he can’t even take it seriously; which just shows: if he can’t take that seriously, then he just doesn’t understand about computers at all.
Nigel Warburton:
But isn’t Searle’s point not that such a computer can be competent because the one with him as a component of the system is competent, but that it wouldn’t genuinely understand?
Dennett:
Well it passes all behavioural tests for understanding: it forms images; it uses those images to generate novel replies; it does everything in its system that a human being does in his or her mind. Why isn’t that understanding?
11:10
Philosophy Bites interview with Daniel Dennett on the Chinese Room Argument – transcription excerpt

7. Complexity versus organisation
    There is a familiar way of alluding to this tight relationship that can exist between the organization of a system and its environment: you say that the organism continuously mirrors the environment, or that there is a representation of the environment in – or implicit in – the ‘organization’ of the system. p.31 TIS

Note 7.1: Perhaps Dennett’s sentence, “there is a representation of the environment in – or implicit in – the ‘organization’ of the system”, gives a vital clue: perhaps the crucial issue is not ‘degrees of complexity’ but the very nature of the organisation itself. In figures 5 and 6 above there is a notion that degrees of complexity in systems are relevant to questions concerning intentionality. However, from atoms to nebulae, or from bacteria to adult humans, the list of potential objects and systems that could be classified as complex is endless, and this ‘complexity’ appears to have minimal bearing on degrees of intentionality. Superficially, therefore, it would appear that it is the nature of a system’s organisation, rather than its complexity, that is relevant to intentionality and levels of representation.

8. More thoughts about organisation
    Thus (1) the blind trial and error of Darwinian selection creates (2) organisms whose blind trial and error behavior is subjected to selection by reinforcement, creating (3) “learned” behaviors that generate a profusion of (4) learning opportunities from which (5) the most telling can be “blindly” but reliably selected, creating (6) a better-focused capacity to generate (7) further candidates for not-so-blind “consideration,” and (8) the eventual selection or choice or decision of a course of action “based on” those considerations. Eventually, the overpowering “illusion” is created that the system is actually responding directly to meanings. It becomes a more and more reliable mimic of the perfect semantic engine (the entity that hears Reason’s voice directly), because it was designed to be capable of improving itself in this regard; it was designed to be indefinitely self-redesigning. p.30 TIS

Note 8.1: Dennett indicates that there are various identifiable levels of organisation from organisms that replicate, to plants and animals that display only innate behaviours, to animals that are capable of learning, and to humans that can explore thoughts creatively. Additionally, Dennett expresses the view that humans have some organisational capacity that grants uniquely ‘privileged access’: For if I am right, there are really two sorts of phenomena being confusedly alluded to by folk-psychological talk about beliefs: the verbally inflected but only problematically and derivatively contentful states of language-users (“opinions”) and the deeper states of what one might call animal belief (frogs and dogs have beliefs, but no opinions). p.233 TIS

Does this suggest that systems organisation takes discrete evolutionary steps? If so, how might this idea impact on our interpretation of ‘degrees’ of intentionality and representational content?

9. How does the organisation of systems relate to representation, to intentionality and to objects?
    It is obvious, let us grant, that computers somehow represent things by using internal representations of those things. p.215 TIS

For an artefact to have an internal representation of the environment is for it to possess information – in the most general terms – about the environment. Both artefacts and organisms must possess some form of ‘information-construct’ to enable them to respond with accuracy to environmental conditions. But what is the difference in the construction and organisation of information between the multitude of evolved organisms, and between organisms and artefacts?

A computer can be designed to ‘mirror’ the environment – theoretically to an infinitely accurate and responsive degree. In doing so, a computer must possess an impressive sensory and evaluative information capability, and organise and structure the information in very complex ways. From the third-person perspective, such a computer is clearly going to be capable of appearing as if it possesses an internal representation of the environment and as if it possesses intentionality. Is this third-person ‘evidence’ sufficient to insist that there is no additional, defining first-person narrative indicating a distinctive intentionality, whether intrinsic or genuinely derived?

A robot’s aggregated information can reflect the environment and enable suitably complex responses. When it goes through the process of evaluating a favourable environmental condition, it will respond to that condition as if it felt its goodness; but would this apparent goodness merely demonstrate the third-person consequence of a functional mechanism – a mechanism designed to identify favourable conditions and respond appropriately as per its design function? What is the ‘reward’ that a computer ‘experiences’, from which it might learn by means of actually feeling what is good or bad about the environment? Dennett is of the opinion that there are Artificial Intelligence (AI) programs today that model in considerable depth the organisational structure of systems that must plan “communicative” interactions with other systems, based on their “knowledge” about what they themselves “know” and “don’t know,” what their interlocutor system “knows” and “doesn’t know,” and so forth. p.39 ER
When can one say that information becomes knowledge? And what ‘knowledge’ – as opposed to information – can a complex aggregate of parts possess? Can an extremely complex array of falling dominos be called a ‘system’ – a system that possesses ‘knowledge’ about what and why the first domino fell – or is the series of complex activities merely a set of causally functional operations? A domino’s purpose, for the implementation of function, is to fall; but can a series of dominos that fall be called a true systems-construct that possesses knowledge about the environment to which its individual dominos might react? (see also, Ned Block, 1978 – ‘Troubles with Functionalism’)
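The point about the robot’s ‘reward’ can be put concretely. In the hedged sketch below – my own construction, not drawn from Dennett or from any particular AI system – the ‘goodness’ of a condition is nothing over and above a programmed comparison against designer-supplied ranges; whether that functional step amounts to the robot feeling anything is exactly the question that the third-person description leaves open.

    # Illustrative sketch only: a robot "evaluating" a favourable condition.
    # The evaluation is a designed comparison; nothing more is claimed.

    FAVOURABLE = {"temperature": (15.0, 25.0), "light": (200, 800)}  # designer-chosen ranges

    def evaluate(reading):
        # Return a 'reward' score: the fraction of readings that fall inside
        # the designer-specified favourable ranges.
        in_range = [lo <= reading[key] <= hi for key, (lo, hi) in FAVOURABLE.items()]
        return sum(in_range) / len(in_range)

    def respond(reading):
        # The robot responds "as if" it felt the goodness of the condition,
        # but the score is only the output of the comparison above.
        return "stay" if evaluate(reading) >= 0.5 else "seek_better_conditions"

    print(respond({"temperature": 22.0, "light": 500}))  # -> "stay"
    print(respond({"temperature": 5.0, "light": 50}))    # -> "seek_better_conditions"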

10. What is a true systems-construct?

Note from Dennett’s early statement of intent that he assumes a distinction between ‘a system’ and ‘an object’, but provides no explanation of what that distinction entails other than mere ‘complexity’:

    I will argue that any object – or as I shall say, any system – whose behavior is well predicted by this strategy is in the fullest sense of the word a believer. p.15 TIS

When can an object, however complex, be called a system? Who is to say that a super (duper) computer or extremely advanced robot is not merely a complex object, such that the nature of its functional organisation remains no more than primitive despite its functional acuity and its reactive – even adaptive – sophistication?

A true system is a construct of dynamic constituent parts. It does not organise information by virtue of its function. Rather, the very dynamic of its parts is the information construct in and of itself: data is not ‘read’ from the environment to construct information; the construct is, by virtue of its interactive engagements, intrinsically an information construct. Instead of the components containing, representing or organising information sets, the entire dynamism of the construct itself dictates the nature of its response to environmental conditions, and it is this that qualifies the nature of its informed, representative character.

Information, which an object might be said to possess about the environment and which might be said to ‘represent the environment’, leads to intentionality of purpose if, and only if, that informedness is embodied in the dynamics that determine the nature of the object’s construction. Thus it is the interaction of the dynamic parts of a system that determines the nature of its informedness and hence its behaviour. A true systems-construct is itself the representation of the environment by virtue of the interaction of its dynamic components: any representation is implicit in the nature of its dynamic construct.

Of the baby and the bath water

To the left, we have the baby. To the right, we have the bath water.

On the left is the baby:
I like Dennett’s intuitive call insisting that we relate the simplest living organisms to sophisticated animals like humans, i.e. that we assume there is an evolutionary progression binding the simplest of replicating organisms to ‘complex’ humans; there is an explanation to be had.

Dennett hates the idea that mankind should be so “arrogant” as to proclaim that there is a dividing line, a demarcation point, where one can say, “this human has intentionality whilst that animal does not or, heaven forbid, this human has intentionality whilst that cognitively impaired human does not.” This evolutionary connection is the baby I don’t want to throw out, i.e. I like Dennett’s position that says the privileged access that humans possess (he says, ‘appear’ to possess) through their exceptional mental capabilities (he says, through their ‘apparent’ intentionality) is not “magical”. I agree, there is a connection to be discovered – there is no magic.

On the right, we have the bath water:
Dennett is very keen to articulate the view that an evolutionary analogy from very simple to very complex objects and/or artefacts has a direct, parallel relation to the evolution of simple to sophisticated organisms.

The flaw in Dennett’s parallel analogy rests on this one sentence of his:

“…any object – or as I shall say, any system…”

The error is in the following false proclamations:
A. object = system
or
B. object + complexity = system

From these alternative proclamations Dennett makes the false conclusion:
‘objects or systems (who cares which) have ‘as if’ or ‘actual’ intentionality (who cares which)’, i.e. they can be treated as analogous.
This is extended further by Dennett as follows:
‘there is no true ‘as if’ or ‘actual’ intentionality distinction.’

Alternatively, I say the following about systems and objects:

A (true) system (regardless of its simplicity or complexity) has ‘intrinsic’ purpose. A system’s dynamically interactive components form a stable “body” (or, to borrow Dennett’s terminology, “object”) and, through the interactions that are formative in its dynamic construction, determine the body’s functional characteristics and behavioural properties.

An object that is not a true system, on the other hand, may either consist of a ‘mere’ aggregation of component parts or may not itself be a product of dynamic interaction. Parts can be said to be an aggregate when they do not interact dynamically in such a way as to determine and define the object’s behavioural properties or its stability. An aggregation of parts can represent the environment, will react to the environment and will display behavioural characteristics, but the dynamics of the parts, through their interaction, are not determinate of the stability of the whole.

Therefore, throwing out the bath water allows us to express the following:

    a) a thermostat, a computer, a robot, a tube of toothpaste, a machine are examples of aggregated constructs. They are not systems-constructs.
    b) an aggregated construct cannot have ‘intrinsic’ purpose or ‘intrinsic’ intentionality.
    c) a body (or an object, if you take Dennett’s terminology) has ‘intrinsic’ purpose if the very dynamic interaction of its component parts determines its behavioural and functional properties (see the toy sketch after this list).
    d) computers, designed as they are today, have and would have zero intrinsic intentionality regardless of their theoretical complexity and regardless of the third-person perspective of their apparent ‘as if’ intentionality and adaptive, imitative capabilities.
    e) all systems regardless of simplicity have intrinsic intentionality, be they an atom or human consciousness.
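To give points c) and e) a toy numerical face, the sketch below – entirely my own construction, offered only as a loose analogy for the distinction being drawn, not as a proof of it – contrasts two parts coupled by mutual negative feedback, which jointly pull the whole back towards a stable state after a disturbance, with the same two parts left uncoupled, which simply remain wherever the disturbance leaves them.

    # Toy analogy only (my own construction): 'dynamic interaction of parts'
    # versus a mere aggregate, in terms of recovery from a disturbance.

    def step_coupled(x, y, k=0.3):
        # Each part's change depends on the other: mutual negative feedback.
        # The pair has a joint stable state at x == y.
        dx = -k * (x - y)
        dy = -k * (y - x)
        return x + dx, y + dy

    def step_aggregate(x, y):
        # The parts do not interact; nothing restores any joint state.
        return x, y

    x, y = 1.0, 5.0  # a 'disturbance': the parts are pushed apart
    for _ in range(10):
        x, y = step_coupled(x, y)
    print(round(x, 3), round(y, 3))  # both converge towards 3.0

    a, b = 1.0, 5.0
    for _ in range(10):
        a, b = step_aggregate(a, b)
    print(a, b)  # 1.0 5.0 - the aggregate simply stays disturbed

On this analogy, the coupled pair’s tendency to restore its own stable state is a property of the interaction itself, which is the sense in which, on the account given here, a true system’s ‘purpose’ is intrinsic to its construction.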

How can the aggregated information sets of computers be transformed, by intrinsic intentionality, into knowledge, into desire, into internalised purpose? To answer this question is to understand the nature of the hierarchy that arises through the emergence and evolution of self-organised types of systems-constructs. And so we can move forward, from the simplicity of our linear figurative illustrations above, to a hierarchy of types of systems-constructs that evolve different types of form, whose information about the environment varies widely, thereby creating internal representations particular to their type – from the humble atom all the way up the hierarchy to human consciousness. cf. figure 7 below:

Figure 7 – Hierarchical quantitative emergent steps with analogue evolutionary developments


References

Dennett (1971) Intentional Systems. The Journal of Philosophy, Vol. 68, No. 4, pp. 87-106
Dennett (1981) True Believers: The Intentional Strategy and Why It Works. In A. F. Heath, ed., Scientific Explanation: Papers Based on Herbert Spencer Lectures Given at Oxford University. Oxford: Oxford University Press, pp. 150-167 (also chapter 2 of The Intentional Stance)
Dennett (1984) Elbow Room: The Varieties of Free Will Worth Wanting. Cambridge, MA: MIT Press
Dennett (1987) The Intentional Stance. Cambridge, MA: MIT Press
Dennett (2009) Intentional Systems Theory. In The Oxford Handbook of Philosophy of Mind
Warburton (2013) Daniel Dennett on the Chinese Room. Philosophy Bites podcast interview

Comments

  • David Turnbull

    hello Mark, You are saying, at the end, in fig 7, that the current state of human evolution is “the ability to interpret causal mechanisms behind qualitative experiences”.

    This statement itself is (for me at least) in need of some interpretation. I shall assume the following sentence to be true. The act of any sort of interpretation is an act in search of understanding.

    As I read it, your analysis of the evolutionary ascent to being human seems to be giving some privilege to the act of interpreting causal mechanisms, as if these are what we are trying to (or need to) understand when we have qualitative experiences.

    I agree this is often the case. If someone is irritated with me, or conversely, happy to see me, it sometimes helps to understand the cause(s) of the response. At other times, the focus is not on the response as such or its causation. It is on what the other person is thinking. Asking why they think X, may not be the same as asking, what caused them to think X.

    In short, does your account reduce the having of reasons to a mechanism which causes thoughts to occur?

    If your answer is “yes” it appears that you may be committed to the belief that a third party suitably equipped with information about the causal state of your brain may know at least as much about what you think, and why you think it, as you do.

    This position (and I’m simply asking here for clarification, as to whether you subscribe to it) has the somewhat bizarre result that a third person may know in advance of my actually thinking X, that I am thinking X. (If you analyse causation, you will surely see why this may be a problem.) Having a full account of causation is predictive. Thinking, as such, whilst we are engaged in it, may lead to a wide variety of possible thoughts, conclusions, insights, and so forth, none of which we are in a position to predict beforehand.

    I’d be interested to know what you think

    David

  • David:
    I don’t think that my sentence, “the ability to interpret causal mechanisms behind qualitative experiences” is very satisfactory – thanks for highlighting this:

    Many animals have the ability to learn through observation of causes and their effects. Their cognitive mechanism is capable of determining the qualitative value of experience, and by association, they are then able to learn which experiences ’cause’ those particularly good or bad feelings to happen.
    The distinction with the human rests in the ability to define principles of causation that underlie general mechanism. In effect, determining ‘principles of causation’ [in preference to “interpretation of causal mechanisms”] entails generating concepts ‘about’ qualitative experiences.

    In your example, if someone irritates you, the concept you have already constructed concerns the type of phenomenon that caused this feeling in you. This type of phenomenon is conceptualised, and therefore spoken by humans, as being an ‘emotion’. The concept of emotion is what humans most commonly assign to the principles governing phenomena of feeling – in this case the emotion is that of ‘irritation’. Retrospectively, an individual may then choose to analyse this type of ‘emotional’ response. The analysis might be considerably complex and varied. For example, it might entail looking at the why, how and when, to enable the construction of concepts about future potential responses to similar situations – these analyses become the subject of moral conceptual stances.

    The key relation is the ‘principles’ that we as individuals identify to explain in general terms our own qualitative experience and how these experiences relate to the world.

    An algorithm may enable a robot to catch a ball (the algorithm interprets the effects of gravity on the trajectory of the ball etc.) but the robot does not do the catching because it feels good about it: the principles in question relate to the phenomenon of experiencing the qualitative value of the catching experience, i.e. you can’t have a top-down, sterile predictive capability. The hierarchy must be present.
    Consequently, the third person that you speak of must have privileged access to your particular bio-chemical, experiential, conceptual constructs to know in advance what you will actually think. Knowing the causal mechanism is not enough.

    Hope that answers the questions adequately…
    Thanks

  • Scott Smith

    Just noticed this thread’s title. Perhaps you might find of interest my chapters on Searle & Dennett in my “Naturalism and Our Knowledge of Reality” (Ashgate, 2012). Dennett seems to realize the importance of essences if there is to be real, intrinsic intentionality (which he denies exists). But he doesn’t seem to realize that by denying such things, he (& any philosophical naturalist, I think) is left (at best) only with “takings” (interpretations, conceptualizations), and (at worst) no knowledge of reality. But, we do know many things, & so it seems that intrinsic intentionality is needed, & must be real.

    Searle is a fascinating case; his account of collective intentionality is quite good, I think. But he too seems left with just different ways we talk about (or conceive of) matter (ie, brute reality). But since this is done always from under an ‘aspect,’ he too is left without any direct access to reality, & thus he seems to leave us without a way to get started in knowing reality. But there are many things we do know, & thus it cannot be on the basis he gives us.

  • David Turnbull

    Thanks to Mark for clarifying his position. Mark you seem now to wish to designate the human as not having an underlying interest in causal mechanisms as such, but rather, “the ‘principles’ that we as individuals identify to explain in general terms our own qualitative experience and how these experiences relate to the world”.

    So let’s look at this. Principles are not causal mechanisms. Principles are general concepts. In science, such general concepts are supposed to have explanatory power. In ethics too, this is often the case. For example, if I’m asked why I helped a person in a wheelchair whose wheel was stuck in a rut, I might reply according to an ethical principle. However, unless you (as a third party) had had lots of prior discussions with me about ethics, you would only be guessing as to whether I would opt for a deontological, a consequentialist, or a virtue-based principle. Even then you might be wrong, for on this particular occasion perhaps I didn’t think about it at all. I might have just said I felt sorry for the person. But if you then inferred that the basic principle of ethics I resort to, if asked to give a complete account was “feeling sorry” you would be wrong.

    The third party perspective is at a decided disadvantage. The complexity of the reasoning process involving principles is so staggering that I’m not sure even first person accounts are adequate.

    The question I’m posing is what is the approach you are using attempting to achieve? Is this an attempt to reconstruct a watertight materialist version of reality?

    I see Scott (above) responding from an essentialist position. Now, essences are not material entities, as far as I’m aware. They are idealisations whereby an entity can be understood as the sort of thing (being) it is.

    Let me know if I’m wrong about this. We philosophically engaged people always seem to have to traverse the territory mapped out by a dispute between materialism and idealism, and my question to all who are reading this thread, is, am I right in interpreting this dialogue as our being potentially involved in that sort of dispute?

  • Scott says: “Dennett seems to realize the importance of essences if there is to be real, intrinsic intentionality”
    I would contest the point that ‘essence’ is required to explain intrinsic intentionality. I would go so far as to say that essence is unrelated or irrelevant to intrinsic intentionality – which is why Scott’s view, “denying such things, he (& any philosophical naturalist, I think) is left (at best) only with “takings” (interpretations, conceptualizations), and (at worst) no knowledge of reality”, does not necessarily follow:

    Once you have a material explanation for intrinsic intentionality, the connection of intrinsic intentionality to notions of essence are broken apart. This is why the title of my article includes the phrase “Searle’s 1 omission”.
    All true systems-constructs have the same intrinsic intentionality, which is to maintain dynamic stability. Current computers and their software are not true systems-constructs – regardless of their complexity – so they do not possess any intentionality (this is “Dennett’s 1 error” – as illustrated by figure 7 in my article).

    David: “what is the approach you are using attempting to achieve?”
    Explaining the unity underlying great complexity is a hallmark of reductive explanations which are beneficial for their explanatory, extrapolative, and predictive power.
    e.g. The processes that generate principles are dictated by the intrinsic nature of systems-constructs. Importantly therefore, ‘reason’ is not the bottom line in the acquisition and maintenance of principles. Rather the overriding consideration (at this level) for an individual’s mentality is the need for ‘stable’ conceptual constructs. Truth and reason are merely consequential or accidental by-products – a point well worth noting.

    David: “Is [the approach] an attempt to reconstruct a watertight materialist version of reality?”
    The reconstruction is intended to be watertight only as an explanation of why the first-person perspective comes to exist and of what the characteristics underlying the first-person perspective must be; which includes characteristics that we humans associate with and define as properties of consciousness and awareness. However, a gap remains: the explanation does not exclude either a materialist or a dualist stance. We would still want to know why any given perspective that is our own is ours rather than someone else’s – why are we the particular first-person we are, in the 13.7 billion year history of the universe, rather than someone else? The explanation of the first-person does not provide this answer.

  • Scott Smith

    “Once you have a material explanation for intrinsic intentionality, the connection of intrinsic intentionality to notions of essence are broken apart.”

    And that is the big issue – IF we can have such an explanation. Just to be brief, that’s what I have tried to address with all the major naturalistic options on offer in my book. And I argue that they all fail. (But I realize that here, I am not making an argument for that; I am just pointing to where I have made such an argument, in case anyone is interested in following up on that line of thought.)

    I apologise if I sounded abrasive or dismissive. Of course, I find it highly plausible that your analyses of all the major naturalistic options show they fail. I have no issue with that possibility. But I was also making the point that a full and complete explanation of the first-person perspective does not commit (or condemn) one to either materialism or to dualism – why? Because the problems are not entwined as is most often assumed.

  • David Turnbull

    Interesting… Mark, can you explain your shifts between first and third person perspectives?

    On the one hand you write (both speaking from and referring to first person): “Rather the overriding consideration (at this level) for an individual’s mentality is the need for ‘stable’ conceptual constructs. Truth and reason are merely consequential or accidental by-products – a point well worth noting. ”

    On the other hand (possibly, but maybe not, referring to an actual third person): “The reconstruction is intended watertight, only as an explanation of why the first-person perspective comes to exist and of what the characteristics underlying the first-person perspective must be; which includes characteristics that we humans associate with and define as properties of consciousness and awareness.”

    I wonder whether you have achieved an actual third person perspective. You are using intentionality all throughout, and you do not say how an explanation of the first person perspective, is logically possible. You make this as an assumption (that such an explanation exists), and assumptions are always part of any claim of truth and reason.

    In short, I wonder whether you can ever get outside (or below) the very schema of intentionality you are trying to explain?

  • David: “can you explain your shifts between first and third person perspectives?”

    One can speak of forces in nature that result in particular behaviours of ‘third parties’. For example, a ball (third party) bounces – because of gravity’s influence and the chemical properties of the constituents of the ball. Of a human one can say that external conditions lead to certain behaviours – behaviours that we might observe of the third-person and which can be associated with those types of conditions. This is the calling of the behaviourist:
    A person cries when hit. They cry because hitting causes crying.
    The next question then becomes “but why? why is there crying?”
    Answer: because ‘such and such’ emotion arises (because of that jelly stuff in the skull – enter the cognitive scientist)
    The next question is “but why? why are there such and such emotions?” etc. etc.
    With further third-party analysis there is no ultimate ‘question and answer’ that can reveal the first-person answer we seek, either from the behaviourist or the cognitivist.

    So, how do we get a first-person explanation?
    One key point in the article is that one needs a definitive systems definition. One cannot just call anything one wishes, ‘a system’.
    This definitive definition tells us that the dynamics of a true system will always ‘seek to maintain stability’. Thus anything that is a true system possesses an intrinsic purpose to sustain stability. This is its intrinsic intentionality. From this, we have a unified first-person account of the intention of all true systems regardless of their complexity or of the nature of their construction.
    The next task is to relate this to mentality:

    1. Evolution of systems-constructs:
    Ultimately, a system’s stability will always be compromised by environmental interaction.
    This sometimes leads to the destruction, but can also coincidentally lead to the evolution of systems forms.
    Evolution of form tends to lead to increasingly complex or sophisticated systems-constructs.
    2. Emergence of new types of constructs:
    Eventually increasing complexity of systems-constructs leads to the emergence of novel types of systems-constructs.
    These new emergent systems-constructs evolve as of point 1. above (as do all systems) and the cycle continues.
    Thus we end up with a hierarchy of types of systems-constructs.

    The Hierarchical Systems Theory of consciousness explains that mental properties and characteristics are governed by this unified principle of hierarchical systems-constructs. It is a reductive account of the evolution and emergence of those properties and characteristics that humans associate with consciousness.
    As it is underpinned by an explanation of the intrinsic intentionality of all such systems, it determines the first-person explanation we seek.

    Thus, armed with this first-person explanation above in its fullest interpretation – unlike the third-person insights we might acquire from behaviourists and cognitivists, which are the kind Jackson’s Mary becomes very knowledgeable about – Mary will know in advance that a certain ‘red’ of objects causes a first-person phenomenal experience with certain characteristics that she could describe without ever having experienced it. Not having experienced red, she would nonetheless be able to empathise with us about what the experience would be like. When she then sees red for the first time, she would not be surprised in the slightest by the experience.

  • David Turnbull

    hello Mark, thanks for this. I still need more clarification. You speak of definitive definitions.

    I’ll try offering some of these. A third person perspective is someone claiming knowledge about another person, without direct knowledge of the other person’s intentions. A behaviourist is someone who assumes (taking a third person perspective) that an adequate source of information about brain chemistry is sufficient as a means of explaining why another person does what they do. A cognitivist (taking an alternative third person perspective) is someone who assumes that brain chemistry plus antecedent cognitive information about the person is sufficient to supply this explanation.

    An interpretivist is someone who rejects both of the above. Rather, the interpretivist relies on the first person perspective of the other, in order to find the explanation for why the other person does what they do.

    The question I wish to ask of you is this. Are you (a) a behaviourist (b) a cognitivist, (c) an interpretivist, or (d) something else?

    If (d) can you explain this please?

    David

  • David:
    d) – Why do you need the label?
    As soon as one says, I am a ‘….ist’, people say, “Ahhh. One of them!” and they jump to presume certain things which the individual invariably is not – or at least not entirely what they truly represent or are saying.

    I say definitive definition of ‘systems’ – because systems science is awash with multiple definitions to suit the applications i.e. the definitions are meaningless. I think it is an area that philosophy needs to look into more critically, but that is something else entirely.

    Thank you for all the challenging questions. I appreciate the time and effort you have put in. It all helps me formulate ideas and think of ways of communicating them differently.

  • David Turnbull

    hello Mark

    Thanks for clarifying that you are (d), and not (a,b, or c). May I ask: When you wrote “you” in the question “Why do you need a label” did you (Mark) mean me (David) or was it a case of ambiguous reference?

    I ask this because, whilst it might be true to say of some other person that they “jump to presume certain things which the individual invariably is not”, I (David) would want to say it is not true of me (or at least not always), and certainly not true in this situation.

    This reluctance on your part to be labelled, might be part of a desire to escape from the intentional systems of others. If so I can sympathise with this. It took me thirty or so years after I started doing philosophy to actually refer to myself as a philosopher. Eventually I had to do this so as to help other people understand that I am not an essentially religious person, that I am not an ideologue, or a demagogue, or a moralist… (and so forth).

    As I understand it, I am working within a first person intentional system (my own). And you are working within yours. As you propose it, the essential characteristic of such a system is that it seeks to maintain stability. I do have a question about the word “stability”. Being dead is an example of stability, in the sense that it is a state that resists change; in particular, dead people seem not to have the characteristic of asking questions, expressing desires, and so forth. Yet the stability of a dead person is not the sort of stability that living people ordinarily seek. In fact they rather seek to avoid such sorts of stability. Indeed, there seems to be a preference among living people, whereby if they have a choice between the stable state of being dead, or being in a highly unstable state of being alive, even a condition of chaos and disorder, involving high anxiety and stress, they will still choose the latter over the former.

    The question then, is whether the stability of a system is something that a system seeks to sustain for its own sake, or whether it seeks to sustain it for another value. As I’ve suggested above, the first person intentional systems that I know about, seem to value living, even in a highly disorganised or chaotic way, over being dead.

    I’m just posing this question to you, in the ongoing quest that I seem now to have acquired (albeit only recently) to understand what you are attempting to get your readers to agree with.

    David

    The question “Why do you need a label?” had been intended for you – I wasn’t sure why you were asking.

    The stability is in reference to the ‘construct’ of the system; the construct being defined by the dynamics of the constituent parts. When a system dies, its construct disintegrates or, more accurately, ceases to display the consistent functional behaviours that are typical of a systems-construct of its type. The disintegrated parts may form an ‘equilibrium’, which is stable – but the construct that ‘defines’ the system, its function, and its behaviours is not stable, but is incoherent or absent.

    “The question then, is whether the stability of a system is something that a system seeks to sustain for its own sake, or whether it seeks to sustain it for another value.”
    The only consideration of a system is its own stability. That being said, humans, with their conceptual type of knowledge, are able to consider value – beyond the qualitative value of experience itself. A human can develop ideological concepts about the value of community etc, and apply those considerations in its behavioural responses to circumstance. However, ideologies of this nature are part of the stability of an individual’s conceptual construct (concerning the nature of their interpretation of reality – in this wider sense) – the construct is still complying with the requirements of a true system to maintain its stability. An individual therefore, might choose to fight on behalf of the ideologies he subscribes to, because they are foundational to conceptual stability concerning the individual’s interpretation of reality. This explains why a human may choose, counter-intuitively, to die to uphold the ideology that defines their personal identity’s relation with reality. That identity is seen, or ‘interpreted’ by the individual, to define their ‘self’.

    That answer might not have been to the question you were asking…

  • Does anyone think – on reading my article – that I have a hard time taking seriously Dennett’s resolute refusal to be an essentialist?
    Is this how the article reads to you?

  • David Turnbull

    hello Mark

    The question I’m asking relates to the shifts between first and third person perspectives in your account of the model you are working with or in. I don’t claim to be working in such a model however I’m willing to talk through with you certain philosophical issues it seems to raise.

    The example of death is a case in point. From a third person perspective it is as you say a condition of the loss of any previous states of the system itself. It is the extinction of the system. However from a first person perspective death may be regarded, as certain poets have done, as “blissful” or “easeful”, thus making it appear as the regaining of stability rather than the loss of it.

    There is clearly a disjunction between first and third person accounts, and this is an example. The question I wish to pose is: how do you justify your shifts between first and third person accounts?

    To give my question a critical edge, I’ll ask the question this way. Is your own interpretation of the model, not dependent on your own first person choice (and thus privileging) of certain third person modes of representing various modalities of human experience?

    David

  • David Turnbull

    The other part of my question relates to the notion of stability itself. I don’t know whether stability is something we always desire. In fact sometimes stability seems less attractive than chaos, and certainly stability sometimes seems like death, and any sort of turmoil or trouble is preferable to an endless monotony. Why else do so many young people (and old people as well) “buck the system”?

  • David Turnbull

    In answer to your question about your response to Dennett’s “resolute refusal to be an essentialist”, which would be an actual intentional stance of his: I am convinced, on re-reading your article, that you do ascribe actual intentions to DD, and that you do not merely treat DD “as if” he has intentions. This sentence from your article indicates that: “It is not obvious why DD chooses to be less explicit in TIS.” You believe that DD really did choose this, even if you are not clear why.

    Given that you are working in an actual intentional framework (as I am) the question still applies. How do you justify a shift from a first person intentional account to a third person “systems” account?

  • Essentialist query:
    Yes, I understand now. Thanks for clarifying this.
    But I am not actually saying one way or other or trying to draw conclusions about what DD is or is not. It doesn’t bother me.
    To explain, I am merely highlighting – and in doing so perhaps being a little excessive – that in The Intentional Stance DD says “treating… as a rational agent” (point 2 in article – red type), whilst in his earlier version of 1971 he says, “treating as if it were a rational agent” (point 2.1 in article – green type). So DD has decided to change the wording in a subtle way… why? Perhaps it is a typo. I am merely pointing it out without drawing any conclusion about what it means.

  • David Turnbull

    Ah, Mark, I too get the picture more clearly. Thank you. I had inferred (wrongly) you were wanting to jump on the bandwagon with DD.

    The question I posed, then, can be posed to anyone who wants to make materialism into an ultimate explanation of consciousness. My problem with that comes down to this. If materialism is the answer (or we assume it is), then anything that appears to us as an intentional act is actually something else: the behaviour of a material object. (Replacing “object” with “system” is merely a verbal ploy.) Therefore we are only treating each other “as if” we are human agents with intentions. What is actually going on is something no one ever intended.

    DD’s version of materialism seems to be a way of allowing us to pull the wool over our eyes. Let’s pretend we are intentional. Actually we aren’t.

    However, if we come at this without making a materialist assumption, we are able to treat each other, quite fully, as intentional (with actual wishes, desires, and so forth). The question then is, what justification do we have for treating one another as material objects? I cannot see that there is such a justification, certainly not an ethical one, so I’m happy to go on with the tradition that treats intentionality as part of a complex human agent that defies reduction to material explanations.

    David

  • David, I feel a bit snowed under by your latest 3 or 4 posts. I am not sure what the essence of your queries is at present.

    Hierarchical Systems Theory is a materialist reductive explanation of consciousness, in that it explains why the first-person exists and has the characteristics (the experience and evocation of consciousness and resulting behaviours) that it possesses. It is distinguished from DD’s approach in its treatment of complexities. DD says all complexities can be treated as systems, whereas I argue this is a mistake. Computers, regardless of their complexity, are not true systems. True systems have a directive or purpose in the way they are motivated to behave by virtue of their dynamic construction – one might wish to call this intrinsic intentionality. Human mental states evolved and emerged as a hierarchical systems layer.

    HST explains why there is the experience of the first-person perspective and what that perspective entails and consists of. However, it does not explain ‘individual’ first-person perspectives, i.e. yours or mine. It just explains that a hierarchy of systems has to evolve mental material bodies that possess a first-person outlook and that these first-persons must have self-reflectivity about this perspective.

    In answer to your question about individuals who appear to seek instability: I do write about this extensively at – http://mind-phronesis.co.uk/theory-of-moral-philosophy
    There is a hierarchy of systems-constructs that create an individual’s mentality. The stability of one hierarchical level may cause conflict in the stability of another – there is conflict when an individual chooses to act. Recently, I deleted a section on mental dysfunction and instability from my website, with the thought that I should write a post dedicated to the subject. The other thing to consider is that stability at each hierarchical level is constantly being challenged by environmental conditions and interaction. Consequently, mental states never achieve ‘absolute’ stability but are in constant flux, and the concepts we adhere to may not be truly justifiable in an absolute sense, but are merely the concepts that maintain stability sufficiently for function to be maintained – or not, in the case of mental dysfunction.
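
    If a toy sketch helps, here is a minimal illustration (the level names and numbers are invented for the example and are not part of HST itself): action at one level, together with constant environmental interaction, keeps every level’s stability in flux, so no level ever settles absolutely.

    import random

    # Purely illustrative toy model: invented level names and arbitrary numbers,
    # not a claim about HST's actual mechanics.
    class Level:
        def __init__(self, name, stability=1.0):
            self.name = name
            self.stability = stability  # 1.0 = fully stable, 0.0 = collapsed

        def perturb(self, amount):
            # Environmental interaction, or conflict with another level, erodes stability.
            self.stability = max(0.0, self.stability - amount)

        def restabilise(self, rate=0.05):
            # The level works to restore stability but never reaches an absolute state.
            self.stability = min(1.0, self.stability + rate)

    hierarchy = [Level("biological"), Level("emotional"), Level("conceptual")]

    for step in range(10):
        acting = random.choice(hierarchy)  # the individual 'chooses to act' at one level
        for level in hierarchy:
            if level is not acting:
                level.perturb(random.uniform(0.0, 0.1))  # acting at one level unsettles the others
            level.perturb(random.uniform(0.0, 0.05))  # constant environmental challenge
            level.restabilise()
        print(step, [(level.name, round(level.stability, 2)) for level in hierarchy])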

    The other thing, David: I am not convinced anyone else is reading this exchange. I am happy to communicate by email if you wish to explore ideas relating to HST further. It might be more straightforward to do so by email.

  • David Turnbull

    hello Mark, thanks for the invitation for further dialogue by email. I do know of at least one other person who is reading this exchange. There are people who read, and don’t write, because they are in the process of sorting out what they think, and they do this by thinking through the arguments presented by others. Many people don’t know what they actually think about these topics, and who would blame them for that (!) and so they would like nothing better than to see a free and open exchange of dialogue.

    Now, back to the topic at hand. We agree there is intentionality and we are intentional, first person agents, which means we decide what to do, based on what we think (or at least we may do this if we try hard enough). An excellent example of this sort of intentionality is given in Hannah Arendt’s The Human Condition, where she announces her own project as “To think what we are doing”. So then she goes back into intellectual history and critically investigates the sources of her own thinking (and that of many others in the western intellectual tradition). It’s all very helpful stuff, because we can see the continual creation and modification of ideas that inform human action.

    So now we come to various forms of systems theory, which purport to explain to us how our consciousness evolved. It seems to me that this is a different sort of process of thinking, and I for one would be inclined to say it isn’t doing philosophy. It seems to be some kind of science, except it isn’t really science, because, as Karl Popper would say, it isn’t falsifiable. There are no possible empirical disconfirmations. Rather, we are stuck with having to believe that our highest forms of consciousness, involving reason, are merely tools to maintain the stability of a particular kind of system.

    Now let’s suppose I argue that, in fact, we humans are quite the opposite; that we are very “good” at destroying all kinds of systems, from ecological to economic. Just why we collectively engage in destroying the very basis for our continued existence is something of a puzzle, and I’m personally not convinced that systems theory has the answer to it. However, a true believer in systems theory will merely say we are destroying one sort of system (the planet) in our quest to establish a different sort of system with its own intrinsic intentionality and stability. The argument between us could go on forever, and the true believer in systems theory would always find a way to argue that what evolution amounts to is the establishment of ever more complex sorts of systems.

    OK, I acknowledge this is all too quick and most probably does not do justice to what you would like to say. If I wasn’t impressed by your analytical approach I wouldn’t bother engaging in this dialogue. I do not wish to belittle you. What I would merely ask for is a philosophical justification for your version of systems theory.

    David

  • 1. Is it science or philosophy? This was the first question that I asked Peter Strawson many years ago after he read an early draft. He said, “I would say it is philosophy – because of its broad scope.” But I would say that science has a way of turning philosophy on its head, on occasion. I wish that philosophy would try to do the same to science a little bit more.
    2. David: “It seems to be some kind of science, except it isn’t really science, because, as Karl Popper would say, it isn’t falsifiable.” Would you say that Newton’s Third Law of Motion isn’t falsifiable?
    3. David: “What I would merely ask for is a philosophical justification for your version of systems theory.” Why should it be incumbent on me to provide philosophical justification? Surely, it is incumbent on philosophy to find the limits or boundaries of theory. What does history tell us of philosophical justifications anyway?
    4. David: “The argument between us could go on forever” I don’t agree with this statement. HST, as a unified theory, provides coherent solutions to so many puzzles: it correlates with our understanding of evolution (some of which is measurable), it provides a coherent testable template for the creation of artificial consciousness, its unifying principles can be modelled mathematically, and it can be tested in clinical psychology… etc. So the argument does not go on forever. The evidence just mounts up – it is a bit as if Darwin had written his Origin before setting sail: his argument would have appeared as though it could go on forever.

    I do appreciate your questions; particularly when I can answer them. 🙂 I am not sure where the sticking points are in order to address them, though.

  • David Turnbull

    1. Not sure how to respond here.
    2. It’s quite possible that Newton’s third law is a tautology. It may amount to two logically identical ways of looking at the same phenomenon. HST may also be tautological. I think it probably is.
    3. This seems like an evasive response.
    4. Logically it might go on forever. Practically it stops when someone digs their heels in.

    The sticking points? I need only mention one. The materialist assumption. What justification is there for treating other people on a sliding scale that links them to a machine (plus a bit of complexity)? Treat yourself that way if you like. Please leave me out of it!

    It’s difficult territory. I can’t work with you on the inside of your model as it leaves no room for ethics, no room for grappling with the questions that intentionality makes possible. All that is treated as “ideological”, if I’ve read your emails correctly.

    Thanks for putting your stuff out there Mark. Maybe I’ll return to it if I can find a better approach than what I’ve come up with here.

  • The sticking point: “What justification is there for treating other people on a sliding scale that links them to a machine (plus a bit of complexity)?” I challenge you, David, to quote any passage of mine that says or implies this – Figures 1 to 6 are inferred from Dennett’s text. They are supposed to illustrate Dennett’s stance as implausible and/or inconsistent. They are not what I think!! Figure 7 is what I think – so… what does figure 7 say about machines and computers, regardless of complexity and/or sophistication?

    I could certainly not be clearer in my writing that ethics and intentionality are not merely ideological. I tell you where there is room for morality – HST shows that there is another evolutionary phase to emerge from human mentality that will be as significant a leap as that from ape to mankind.

    I thank you for your gallant effort to get inside HST. I am still not entirely sure what the sticking point is, and think that I must have said something in my previous post that was inflammatory, but I have learnt a great deal from our exchange. Thank you.

  • David Turnbull

    We now have an agreement over a sticking point! Thanks Mark.

    Below is a quote from your paper that comes from section 11 entitled “What is a true systems construct?” You wrote

    “The organisation of information – which an object possesses about the environment and which can be said to ‘represent the environment’ – leads to intentionality of purpose if, and only if, the information about the environment is represented by the dynamics that determine its construct. Thus it is the interaction of the dynamic parts of a system that determine its behaviour. ”

    So when I read this I’m thinking all along you are fundamentally in agreement with DD, as this is what you wrote at the beginning of the article when you referred to not throwing the baby out with the bath water. I had in mind the idea that you are trying to improve DD’s account via a sliding scale to bridge the gap between “as if” and “is” types of intentionality (between appearance and reality).

    This idea (i.e. my interpretation of your work) appears to be confirmed by the quote from section 11, where you refer to an “object” having intentionality of purpose. Here I am assuming that you are talking about yourself and myself as objects.

    So then I have to ask, where does Mark get the justification for treating himself and myself as objects (with “intentions”)? This is the question I’m asking you.

    To me, there is a world of difference between having a dialogue with a person (whose intentions can only be manifested in words and deeds) and an object whose “intentions” (in shudder quotes) can be inferred by some elaborate causal account of the dynamic parts of the system that is, nevertheless, merely the complexity of itself as an object.

  • David,

    First, I have questions for you:
    What is it about intentionality that you consider immutable? What does intentionality give us that you hold dear?

    Secondly, in response to your comment above:

    1. Does this sentence that you reference from section 11 of mine read any better?
    “The organisation of information [CUT] leads to intentionality of purpose if, and only if, the information about the environment is represented by the dynamics that determine its construct. Thus it is the interaction of the dynamic parts of a system that determine behaviour. ”

    2. David, you said,
    “So when I read this I’m thinking all along you are fundamentally in agreement with DD,”
    no… not fundamentally in agreement.

    3. you said,
    “as this is what you wrote at the beginning of the article when you referred to not throwing the baby out with the bath water.”
    From my perspective of Hierarchical Systems Theory, what is the baby that I don’t want to throw out with the bath water? See my answer in the comment to follow.

    4. “I had in mind the idea that you are trying to improve DD’s account”
    This is true. But what is crucial… what is crucial!; is identifying Dennett’s single error and the significance it plays on his conclusions and approach to philosophy of mind. How does this error lead Dennett down the path that you and I object to?

    5. “I had in mind the idea that you are trying to improve DD’s account via a sliding scale to bridge the gap between “as if” and “is” types of intentionality (between appearance and reality). ”
    The sliding scales are only an illustrative example, or interpretation of what Dennett is either implying or committed to having to address in his account. They are not what I am choosing to adopt. The illustrated figures 1 to 6 are part of the analysis of Dennett’s position.

    Of the baby and the bath water: cf. next comment

  • Of the baby and the bath water

    To the left, we have the baby. To the right we have the bath water.

    On the left is the baby: I like Dennett’s intuitive call to insist that we relate, somehow, the simplest living organisms to sophisticated animals like humans, i.e. there is an evolutionary progression that binds the simplest of replicating organisms to ‘complex’ humans; there is an explanation to be had.
    Dennett hates the idea that mankind should be so “arrogant” as to proclaim that there is a dividing line, a demarcation point where one can say, “this human has intentionality whilst that animal does not or, heaven forbid, this human has intentionality whilst that mentally impaired human does not.” This evolutionary connection is the baby I don’t want to throw out, i.e. I like Dennett’s position that the privileged access which humans possess (he says, ‘appear’ to possess) through their exceptional mental capabilities (he says, through their ‘apparent’ intentionality) is not “magical”. I agree, there is a connection to be discovered – there is no magic.

    On the right, we have the bath water: Dennett is very keen to articulate the view that an evolutionary analogy – from very simple objects and/or artefacts to very complex objects and/or artefacts – has a direct correlative or parallel relation to the evolution of simple to sophisticated organisms (i.e. the baby on the left is in the bath water) (cue: the application of The Intentional Stance).

    The flaw in Dennett’s parallel analogy rests on this one sentence of his:
    “…any object – or as I shall say, any system…”

    The error is in the following false proclamations:
    A. object = system
    or
    B. object + complexity = system

    From these alternative proclamations Dennett draws the false conclusion:
    ‘objects or systems (who cares which) have “as if” or “actual” intentionality (who cares which)’, i.e. they can be treated as analogous.
    This is extended further by Dennett as follows:
    ‘there is no true “as if” versus “actual” intentionality distinction.’

    Alternatively, I say the following:

    A system (regardless of its simplicity or complexity) has ‘intrinsic’ purpose.
    A system’s dynamically interactive components create a stable “body” (or, to borrow Dennett’s terminology, “object”) and determine the body’s functional characteristics and behavioural properties.

    An object, on the other hand, may consist of a ‘mere’ aggregation of component parts. Parts can be said to be an aggregate when they do not interact in such a way as to determine and define the object’s behavioural properties, stability or function as a whole identity. An aggregation of parts can make a representation of ‘information’ about the environment; it will react to the environment; and it will display behavioural characteristics. But the dynamics of the parts, through their interaction, do not determine the whole.

    Therefore, throwing out the bath water allows us to express the following:

    a) a thermostat, a computer, a robot, a tube of toothpaste and a machine are all examples of aggregated constructs. They are not systems-constructs.
    b) an aggregated construct cannot have ‘intrinsic’ purpose or ‘intrinsic’ intentionality.
    c) a body (or an object, if you take Dennett’s terminology) has ‘intrinsic’ purpose if the very dynamic interaction of its component parts determines its behavioural and functional properties.
    d) computers, regardless of their theoretical complexity and designed as they are today, have (and would have) zero intrinsic intentionality, regardless of the third-person perspective of their apparent ‘as if’ intentionality and adaptive, imitative capabilities.
    e) all systems regardless of simplicity have intrinsic intentionality, be they an atom or human consciousness.
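
    To make the aggregate/system contrast above concrete, here is a minimal sketch (purely illustrative – the class names, numbers and coupling term are invented for the example, not a description of any real mechanism): in the first construct the parts respond without their interaction determining the whole; in the second, each part’s response depends on the others, so the behaviour of the whole is determined by the dynamic interaction of its components.

    # Purely illustrative sketch: invented classes and numbers, not a real mechanism.
    class Aggregate:
        """Parts respond independently; the 'whole' is just the sum of their reactions."""
        def __init__(self, parts):
            self.parts = parts

        def respond(self, stimulus):
            # Each part reacts on its own; no part's state feeds into another's.
            return [part * stimulus for part in self.parts]

    class System:
        """Each part's response depends on the state of the other parts."""
        def __init__(self, parts):
            self.parts = parts

        def respond(self, stimulus):
            responses = []
            for part in self.parts:
                coupling = sum(self.parts) - part  # interaction with the other components
                responses.append(part * stimulus + 0.1 * coupling)
            # The behaviour of the whole is fixed by the interaction, not by any part alone.
            return responses

    aggregate = Aggregate([1.0, 2.0, 3.0])  # thermostat-like: reacts, but the parts do not determine the whole
    system = System([1.0, 2.0, 3.0])        # organism-like: the parts' interaction determines the whole
    print(aggregate.respond(0.5))
    print(system.respond(0.5))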

  • David Turnbull

    Thank you Mark. That is beautifully explained. Your position is clear.

    I still don’t understand the idea of intrinsic intentionality. The reason for this is that I have no idea what extrinsic intentionality looks like.

    I think the latter must be an oxymoron. Just look at the prefixes ‘in’ and ‘ex’. One is in and the other is out. If extrinsic intentionality is an oxymoron, intrinsic intentionality must be a tautology, and hence explains nothing.

    Coming back to the sentence we are discussing from section 11, it reads to me as a tautology. It reads as a way of defining terms. The very notion of organisation of information leading to intentionality, suggests intentionality to begin with.

    The problem I have is this. I have not read anything yet so far, in what you have written or in any of the voluminous literature anywhere else that is sufficient to convince me that someone has come up with a complete causal explanation for intentionality. I’m not saying that it is logically impossible. I am just saying I have no idea what it would be.

    Let me put it this way. I accept your model. It is tautologically true. I am none the wiser.

    Also, to address your questions to me. I’m not asserting intentionality is immutable nor do I hold it dear. You have assumed that either (a) I hold an ontological position or (b) I hold an axiological position. All my responses to you come from (c) an epistemological position. This position is simply one of not being beguiled by words. I analyse the logic of the language presented to me by asking questions about it. So far all I can see are tautologies and/or contradictions. Nothing that leads me to say “Ah, now I know something I did not previously know.” (Oh I now have some more trivial information, but that isn’t what I mean).

    I would however be willing to explore (a) and (b) and the possibilities therein, but that’s another topic.

  • Mary Clark

    Mark, that is very well explained. Even I … a blatant latent thinker understood it. In my simplicity. I agree, a system does not have to be complex to have intrinsic intentionality (as described by D and Thee). However, perhaps extrinsic intentionality is what people call Intelligent Design? So if that’s the horror we’re pursuing, I’d like to shout Stop! In other words, in making a computer aren’t we the intelligent designers? The only intentionality then is ours, and the machine is still only a robot at our command, even if we give it zillions of options and combinations to choose from.

    When you say “privileged access” above, what did you mean? Access to what?

  • David Turnbull

    Mary, I’ll just add my bit here awaiting Mark’s response. I think that intelligent design would be intentionality extrinsic to a system, such as when humans design a computer to simulate some of our own intelligence. I agree, the intentionality would be extrinsic, but it would never be the intentionality of the system itself.

    My view of extrinsic intentionality remains that it is an oxymoron. A system that has its “intentionality” (shudder quotes) derived from another source is always dependent on the other source, and thus cannot fulfil requirements such as autonomy, that we ascribe to humans.

    There is also a difference between intelligent design models and causal models, such as DD’s causal model, or even Mark’s. There is no intelligence coming in from the outside. Intelligence is a product of the system itself. This is why Mark refers to it as “intrinsic” intentionality.

    The problem here is that once again, there is no autonomy. The system is dependent on the interplay of causal factors. The system is heteronomous.

    It is possible to have a heteronomous intelligent system, but it isn’t a true description of ourselves, at least not always. Sometimes we act heteronomously, such as when we are coerced, or seduced.

    I think we have to distinguish between intelligence and intentionality, particularly of the open ended sort of which we are capable. When we act autonomously, we initiate action, and what we initiate is not dependent on the environment into which or from which we act. There is a new beginning. Without this capacity, we are no different from robots.

    I’m aware that here I’m approaching the making of ontological and axiological claims, and I want to get at the epistemological problem lurking in Mark’s model (as I see it) before going down that track.

  • David and Mary,

    ‘Intrinsic’ is, on my understanding, an add-on, I suppose, to make a clear distinction from ‘derived’. Some may argue that derived intentionality is not possible and that intentionality is intentionality, making the prefix ‘intrinsic’ redundant.

    Mary: privileged access
    I mean the capacity to be self-reflective, introspective, access ourselves, recognise and identify with our first-person perspective.

    David “I have not read anything yet so far, in what you have written or in any of the voluminous literature anywhere else that is sufficient to convince me that someone has come up with a complete causal explanation for intentionality.”

    Perhaps you are unaware, David: there is a reason why I have chosen to make this article of mine on Dennett’s intentional stance the opening introductory section to my ‘book’. It is merely the introduction.

  • Mary Clark

    David, we seem to be saying the same thing about extrinsic and intrinsic intentionality. In that comment, I was just making a little joke as well about the Intelligent Design crowd. You’ve pointed out a contradiction in that theory, for those of us who believe we can act autonomously. But the ID crowd must believe, or want to believe, that they have no autonomy from the Intelligent Designer (God).

    Mark, thank you for the response to the phrase “privileged access.” You are right: whether intentionality is intrinsic or extrinsic probably matters little. Both control the “object.”

  • David Turnbull

    Well Mark! I’ve enjoyed the exchange. As an intro to a book, it could do with an intro to the intro. A quick overview of DD’s model, as well as your own position, with some clear ways in which you differ from DD, and also from Intelligent Design theories, would help. Also a clear summary of essentialism and how you differ from that. I get the sense you are charting a course through some gaps in the theoretical landscape, and it’s not clear how you can achieve an overall synthesis, given the vast differences in perspective.

    I’ve come around to thinking that such a project might well be worth it, even if the results were not what you want, so you have some support from me, for what it’s worth.

    Also, for what it’s worth, I’m wondering whether there is another version of the difference that Aristotle maintained from Plato to discuss here. In short, even if we had a theory that explains the emergence of intentionality, it is not the sort of theory “we” are looking for. What “we” are looking for are those practical principles and concepts that are suitable for intentional beings, whose other attributes include autonomy, caring for others (community-mindedness) and so forth. (“We” being moral philosophers.)

    My challenge to you is to show how, in principle, it is possible to derive an adequate account of intentionality from a causal theory. My view is that Hume’s enunciation of the gap between ‘is’ and ‘ought’ would be fatal for such an account. I suspect you are trying to bridge that gap. And even if you could provide that, how would it be relevant?
