In Defence of Basic Facts: Neurobiology and the Social Structure of Intending
Simon Smith
Abstract
This article addresses a fundamental issue in both philosophy of mind and social ontology. It concerns John R. Searle’s claim that the facts underlying human intentionality are neurobiological. Combining materialist reduction with a metaphysical isolationism that locates intentions inside the individual’s head proves as disastrous to the analysis of a social or personal reality as it does to conceptions of human consciousness. Intentions, it is argued, occur, not inside agents, but in the transactions between agents themselves. Ignoring the fundamentally social structure of intentions disconnects agents from the cooperative activity that constitutes a social world, thereby eliminating the very thing Searle sought to explain.
The neurosciences play a significant role in many, if not all, accounts of consciousness and intentionality these days. This puts the neurosciences at the dead centre of an important philosophical programme: viz. social ontology. Social ontology itself is important because the world in which we live is, as most personalist thinkers would attest, not merely physical. It is, as John R. Searle argues in Making the Social World (2010), [1] constituted by social institutions. Moreover, Searle suggests that the neurosciences are significant because the ‘essence and ontology’ of those social institutions lies in human co-operative behaviour (ix). According to Searle, at the root of human co-operative behaviour is collective intentionality: ‘we-intentionality.’ ‘We-intentionality’ is, in turn, underpinned by individual or ‘I-intentionality’ (47). Being what Searle calls ‘intentionality-relative,’ then, institutional reality is ultimately dependent upon neurobiological facts (17).
This paper challenges this claim for neurobiological dependence, both in its primary assumptions and its specific application to social ontology. Searle’s assertion that ‘all human intentionality exists only in individual human brains’ (44) is, I suggest, mistaken. [2] Undeniably, brains have a part to play, but intentions do not exist in them. They exist, or rather occur, in the patterns of activity which agents intend. Considering this, it seems that there are two main problems with Searle’s analysis in Making the Social World. Firstly, locating intentionality in neural operations risks a physical reduction that makes a nonsense of his entire programme. Secondly, and perhaps more seriously, it underpins a philosophically problematic individualism that tends to isolate intending agents from one another, and so misconstrues intentionality. In contrast, I suggest that intentions do not originate in ‘the heads of individuals’, as Searle claims (55), but in the transactions between individuals and their environment. Intentions are, that is, primarily and essentially interactive. Pressing the point, intentional activity is learned from other agents and conducted in conjunction with them; agents both physical and, more importantly, personal.
Failing to recognise the primitively social or personal nature of intentionality is a serious flaw in Searle’s analysis of ‘I-’ and ‘we-intentionality’. This is unfortunate since his account of the relation between these two forms of intentionality is ‘the essential prerequisite for understanding social ontology’ (26). Crucially, it seems that Searle may simply lack the tools to reconnect individual intentions with the co-operative behaviour characterised by genuinely shared intentions. If he cannot reconnect individual intentions with co-operative behaviour, then the whole notion of social institutions could be fatally undermined.
Following Searle’s example, this discussion will draw on the likes of J. L. Austin and P. F. Strawson. It will, in addition, provide an opportunity to bring one or two names into the current debate that are, perhaps, less well-known than they might be. In particular, I suggest that personalist thinkers such as Austin Farrer and Charles Conti have something vital to offer social ontology that cannot readily be found elsewhere in the philosophical marketplace. Like Austin and Strawson, they too remind us that crucial questions about the physical grounds of intentionality are far from settled, despite what recent discussions of the topic would have us believe. More importantly, however, Farrer and Conti’s conception of persons, of the self, as a fundamentally social reality has wide-reaching moral, political, and metaphysical implications. It is, therefore, an essential component of any attempt to understand the world in which human beings actually live.
2. Equivocations and Experiments
As noted, Searle believes that ‘consciousness and intentionality are caused by and realised in neurobiology’ (25). His seemingly uncritical acceptance of neurobiological evidence is, perhaps, a common error. This is not to deny either the fundamental relation between consciousness and the physical apparatus in which it is expressed or the importance of neurobiology in exploring that relation. Philosophy must, after all, be scientifically enlightened if it is to make any progress. Accept the neurobiological account – that mind, and in this case, specifically, intentionality is to be found in the brain – without question, however, and we risk overlooking an important equivocation. When talking about intentional consciousness, the philosopher and the neurobiologist are not obviously talking about the same thing. According to Searle, when philosophy speaks of intentionality, it speaks of ‘that capacity of mind by which it is directed at, or about, objects and states of affairs in the world’ (25), a capacity which manifests itself in and as a mode of activity. In contrast, as Farrer observes, neurobiology concerns itself with ‘that part of the nervous system, upon the activation of which the occurrence of acts of consciousness is found to be (as it were causally) dependent’ (1960: 24). On the one hand, then, we have the springs of personal action wherein the ‘doingness’ of what we do, as opposed to causal occurrence, can be found. On the other, we have just that causal occurrence. Such different objects of study surely require different methods of identification and explanation.
Those methods, the philosophical and the neurobiological, will no doubt have features in common. Observation alone, for example, will be of limited use to either. Philosophers might spend all day ‘people-watching’ without finding certain evidence of consciousness, as Descartes knew all too well. Likewise, observing neurological processes while their owner performs some specified task will certainly supply Hume’s constant conjunction, but no reasonable grounds for asserting either logical or semantic connection will be discovered. The most sophisticated MRI scanners can only focus our attention, marking the spot for further investigation with a more or less precisely placed ‘X’; they cannot reveal the treasure supposedly buried there.
Observation and causal inference may not help us to discover what philosophers once called ‘the seat of consciousness,’ but controlled interference will. That is the neurobiologist’s route. Having identified the appropriate areas of the brain, the neurobiologist may experiment on them, attempting to recreate the brain owner’s experience of intentionality. This raises a different problem. Interference with brain-processes might stimulate a variety of sensations and experiences. What it cannot do is stimulate the agent’s intending, for intending is something the agent does. Artificially creating the experience of acting purposefully is not the same as acting purposefully. Stimulating the right network of electrochemical processes may replicate with complete accuracy the sensation I have when intentionally lifting my arm. This, however, is no longer something that I have done; it is something that has been done to me. I am not intentionally lifting my arm; I am being made to lift my arm. The two phenomena may be related but they are neither metaphysically nor epistemically the same. Logically speaking, intentions supply the conditions for distinguishing between what I do and what simply happens. They supply, as Searle is evidently aware, the ‘directedness or aboutness of mental states’ and, indeed, of all our activities (26). ‘Directedness’ concerns what I do, it cannot be done to me or for me, for then the intention actualised is not mine but the neurobiologist’s. We could, of course, operate similarly on the neurobiologist’s brain while she operates on mine, but this would only actualise a second neurobiologist’s intentions and so on ad infinitum. No one in this chain of experiments – except, possibly, the neurobiologist standing at its termination – would be able to distinguish between what they were doing and what was being done to them. 
Our object, then, would no longer be the ‘directedness’ of mental and physical activities, but rather the sensations accompanying certain mental and physical events resulting from the actions of the neurobiologist standing behind us. Under the circumstances, it seems we have two options: either abandon the search for intentionality or claim that the neurobiological enquirer is not, in fact, part of the system of events, activities, and intentions being explored. The one who wields the knife, that is, must somehow be ontologically different from the subject who lies under it.
3. Unintended Reductions
We cannot deny that much may be learned from interfering with physical processes. One thing, however, will not be learned, as Farrer reminds us: ‘the normal capacity of the...[processes] to deliver the goods’ (1960: 25). Indeed, we may not even be able to discover what sort of ‘goods’ those interfered-with processes are supposed to deliver.
Philosophical descriptions of intentionality should be informed by neurobiological research, but they cannot simply rely upon it. Our descriptions must also be able to account for the phenomenological and psychological aspects, the ‘doingness,’ of intentionality. Leave that out and we face materialist reduction. The neurobiologist has access to sequences of electrochemical processes corresponding to bodily sensations and motions; he or she has the tools to monitor and, to some degree, control them. The ‘doingness’ of intentionality, however, appears to be accessible to the agent alone. What evidence can show that this experience is an experience of what the agent claims it to be? Why, indeed, bother to talk about intentionality – or consciousness for that matter – when all we have on our hands (as it were) is a physical system?
As problematic as this may prove to be for an account of intentionality, it is more damaging still to the social ontologist. The institutional reality with which Searle is concerned is a product of the ‘status functions’ human beings assign to objects and people. Status functions, Searle explains, are functions, the performance of which ‘requires that there be a collectively recognised status that the person or object has, and it is only in virtue of that status that the person or object can perform the function in question’ (7). That certain pieces of paper function as money and certain people (allegedly) function as presidents and prime ministers is a consequence of the status collectively assigned to them. Crucially, however, status functions have little, if anything, to do with physical systems. Just as the people and objects designated ‘cannot perform the functions solely in virtue of their physical structure’ (7), neither can physical structures alone assign the status that function requires. Otherwise put, status functions are products of logical and semantic systems; they are governed by the meaning of the symbols in which the status is expressed: words, images, uniforms, etc. Physical systems, quâ physical, are not capable of the meaningful use of symbols and so cannot assign status functions. (Indeed, physical systems quâ physical are not capable of doing anything, properly speaking; their ‘mutual collisions and mutual exploitations,’ as Farrer dubs them, are governed by causal uniformities, not intentions (1972a: 188).) If no status functions are assigned, then no social institutions arise. It seems the very thing that Searle’s analysis of intentionality hoped to explain has been eliminated.
Materialist reduction faces its own problems, however. Primarily, it must explain the process of enquiry itself, and the discourse of which it is a part, in terms of physical systems. It must, in short, explain the acquisition of knowledge without reference to the intentional agency that undertakes it. Knowledge of physical systems is, as suggested, acquired through controlled interference with them. Although physical systems impact on one another in accordance with causal uniformities, they cannot exercise control. Likewise, intentional agents do not act in accordance with causal uniformities. They deliberately interfere with those uniformities in search of significant, which is to say, useful, information.
Intentionality, moreover, is reflexive. It is (ordinarily) about some feature of the agent’s environment; but it is also necessarily about the agent intending. Personal pronouns could not function otherwise. This is the heart of the physical reductivist’s error. Abandon talk of intentionality (or consciousness) in favour of talk about physical systems and we must explain what the pronoun ‘my’ is doing in sentences such as ‘my neurological experiment.’ This was P. F. Strawson’s challenge to the ‘no ownership doctrine of the self’ (1959: 98). [3] That challenge is, furthermore, one that neurologically preoccupied philosophers have yet to meet. Whether the reduction is behaviourist or biological, the difficulty remains essentially the same: intentions, in order to be intentions, must be the intentions of some particular agent (or agents). They must be owned; and ownership that signifies nothing but the rather ‘dubious sense of being causally dependent on the state of a particular body’ will not do (Strawson, 1959: 96). Reduce the agent to one link in a causal chain and we lose the very thing we came looking for: the (experience of the) agent as the initiator and controller of intentional activities, both physical and mental. Consequently, the reduction of intentional action to physical causality cannot be stated intelligibly, as Strawson showed: denial of ownership ‘is not coherent, in that one who holds it is forced to make use of that sense of possession of which he denies the existence, in presenting his case for the denial’ (1959: 96). In arguing that certain acts or experiences are simply functions of a physical system, the reductivist is forced to make reference to intentions belonging to some agent. 
He must be prepared to say something like ‘my intending is, in fact, one uniform physical component in a uniform physical process, the operation of which is determined by other uniform physical processes.’ The use of ‘my’ here remains unexplained and, on such an account, unexplainable.
Ultimately, the reductivist allows intending no physical effect, transforming it into an epiphenomenon. This undermines the very exploratory programme from which it arises. Neurobiological experiments do not happen by accident or in accordance with natural uniformities. They are intended activities, something someone meant to do. The idea that intending is merely an epiphenomenon, however, ‘counters the whole assumption of logical study, by denying that meaning governs the formation of discourse’ (Farrer, 1960: 79). The construction of intelligible discourse – such as a description of the electrochemical processes of the brain – is no secondary quality produced by the exercise of a physical system. If it were, we would have no notion of conscious interaction. Pressing the point, the meaning of reductivist-cum-epiphenomenalist claims cannot be incidental to the sounds and symbols in which they are expressed. ‘Anyone who holds that when we think or talk the meaning is a by-product, [Farrer argued] is maintaining a paradox’ (1960: 79). They leave consciousness unable to explain itself or anything else.
4. Brain Processes and Cheese Sandwiches
Searle attempts to avoid the reductivist’s trap by arguing that ‘the higher level phenomena of mind and society are dependent on the lower level phenomena of physics and biology’ (25). Clearly, no reduction is intended. Dependency-relations require a minimum of two terms; reduction defeats this. Furthermore, understanding how this dependency-relation works is, he insists, a ‘basic requirement’ of any credible social ontology (25). Given this, his immediate admission that no one actually knows how ‘consciousness is caused by, and realised in, brain structures’ (26) is, perhaps, unfortunate. For if no one knows how this causal process operates, what makes Searle (or anyone else) so sure that it does? So much for his first ‘condition of adequacy’ (3).
Searle does not address these questions directly, but his analysis does offer some clues. In particular, he claims that the brain’s electrochemical processes have ‘interesting logical properties’ (42).
[N]atural brain processes, at a certain level of description, have logical semantic properties. They have conditions of satisfaction, such as truth conditions, and other logical relations; and these logical properties are as much a part of our natural biology as is the secretion of neurotransmitters into synaptic clefts (42).
This seems like an ideal solution: assign logical properties to biological processes and we avoid materialist reduction while simultaneously throwing a bridge across the notorious mind-body divide. If such claims seem odd, Searle insists, it is merely that ‘we are not used to thinking of natural biological phenomena as intrinsically having logical properties’ (42). In the first place, it is worth noting another equivocation here: to say that something possesses a particular property intrinsically is one thing; to say that it possesses that property ‘at a certain level of description’ is another. It is also worth noting that Searle’s claim regarding the unfamiliarity of such ideas is not entirely true. Logic and semantics are, after all, human constructs, products of conscious, personal activity. The application of such constructs to natural phenomena is hardly novel. Speaking historically, it is evident from anthropological studies as far back as Frazer’s The Golden Bough (1890) that anthropomorphic projection has been central to both magical and religious conceptions of the world since the beginning. Speaking psychologically, on the other hand, Piaget’s studies in the mental development of children demonstrate with equal clarity that these and other similar constructs play a role no less vital in the development of persons (1982).
Nevertheless, Searle’s claims here may also seem odd because they do not really make sense. If biological phenomena intrinsically have logical and semantic properties, then replicating the phenomena in a laboratory ought to replicate the properties. We might, for example, produce electrochemical processes with the logical and semantic properties delineating the intention to make a cheese sandwich. We might do so, moreover, without ever involving the brain that is (allegedly) doing all the intentional work. We could then point to those processes and truthfully say ‘that is the intention to make a cheese sandwich’. But this is surely nonsense, not least because it leaves us with a disembodied intention. More accurately, it leaves us with an orphaned one, since it is supposedly embodied in an electrochemical process. Evidently, however, an intention not actually intended by anyone is not an intention. Furthermore, apart from the sandwich-making act, we could not possibly know that this process was the intention we claimed it to be. However precisely we reproduce the processes observed in agents, those which fail to result in the action (and the sandwich) specified provide no grounds for any such claim.
We cannot reasonably ascribe such properties to a biological process governed by natural, causal uniformities. Logic and semantics are functions of symbol-systems; more properly, they are coefficients of our use of symbol-systems. They are, in other words, linguistic (in the broadest sense) and therefore psychological artefacts. Products of invention and convention, these symbol-systems indicate the ways in which words and other symbols might legitimately be used. Logic and semantics do not correlate with electrochemical processes any more than they correlate with flapping lips or the firing of ink droplets at paper. Furthermore, Searle’s claim that ‘you can have brain processes that are logically inconsistent with other brain processes’ (42) is simply false. There is, as Farrer observes, ‘no not in nature, no physical act... [or biological phenomenon] which consists in negating’ (1960: 41). At the higher levels of conscious activity, there may exist a sort of rejection, as when some actions are enacted rather than others. This, however, is a wholly positive move; one ordinarily called ‘choosing.’ Choosing one word over another is not actualised in the negation of an alternative intention, but in the act of using the word chosen. At the level of brain function, electrochemical processes may cancel each other out, but that too is a positive business. There is no logical negation or contradiction between physical processes. There is only the nullifying of physical effects by other physical effects.
Assimilate causal uniformities to logical processes and we invite the very materialist reduction we hoped to avoid. We deny, in effect, that anything new occurs at the higher levels of conscious, personal activity. If nothing new occurs, and the logical and semantic properties characteristic of conscious activity are prefabricated in the lower levels of electrochemical process, then the supposedly ‘higher level phenomena of mind and society,’ not to mention personality, are not ‘higher’ at all. They are merely reconfigurations of what Searle calls the ‘lower level phenomena of physics and biology’ (25). Since physical and biological evidence is unlikely to bear out claims for logical and semantic properties in electrochemical processes, logic and semantics may be abandoned along with intentionality. Causal uniformities must replace them.
In the end, Farrer is right: ‘no bridge...either mental or physical or neutral, is ever going to join the consciousness-story about us to the physiological story about us’ (1960: 8). Not, at least, while the focus is on the results of neurobiological research. [4] Thus, Searle’s claim that ‘thinking is as natural as digesting’ (43) may be true, but it is misleading. Intentionality evidently is a natural phenomenon, but not in the same sense that the operations of the digestive tract are. Such operations are essentially chemical, functioning in accordance with the causal uniformities governing the physical world quâ physical. Reduce consciousness and intentionality to causal uniformity and, once again, we eliminate the thing we were looking for.
To repeat, the point here is emphatically not to deny the relation between neuroscience and philosophy. It is simply to show that this relation is not, indeed, cannot be, as simple as Searle assumes. This much is clear from the materialist reduction which results.
5. The Action Plant
Searle’s ‘respect for the basic facts of the structure of the universe’ (4) is admirable but too narrow. He does not, for example, consider how we find out about intentionality. How do I know agents – including myself – have intentions? Not by experimenting on brains but by interacting with their owners. My intention to write this paper, to contribute to a discussion on intentionality and social ontology is, I hope, perfectly obvious. In short, the fundamental facts of intentionality are not found in electrochemical processes, but in personal action, in my capacity to actualise those intentions by interacting with others. [5] Such facts are logically and psychologically basic.
Searle is, therefore, wrong to insist that ‘[t]here isn’t any other place for intentionality to be except in human brains’ (44). Intentions are actualised in actions (mental and physical). Indeed, his own analysis almost bears this out. Distinguishing between prior intentions (planning, deciding) and ‘intentions-in-action,’ Searle identifies the latter as ‘a component of the action itself’ (33). Furthermore, he notes, ‘[t]he closest English word to intention-in-action is “trying”’ (34). Again, one wonders whether we could replicate the electrochemical process that constitutes ‘trying’ in a laboratory. And could we point to our replica, calling it ‘trying to do this or that’? Do so, and we face the obvious question: ‘Who is trying to do this or that?’ The answer must surely be ‘no one’: another instance of orphaned intentions.
The lesson is clear. ‘Intentions-in-actions’ are not actualised in the brain. When I am trying to make your sandwich, the trying does not occur in my head. It occurs in the kitchen, actualised in large-scale bodily activities such as buttering bread and arguing about who ate all the cheese. Locate the intentions in my brain, however, and there is nothing to prevent me from stretching out on the sofa and truthfully claiming that I am still trying to make you a sandwich.
Such claims are unlikely to be believed and rightly so. One needn’t be a philosopher (just hungry) to realise that ‘acts’ minus intentions are simply causal events; likewise, intentions (especially ‘intentions-in-actions’) that fail to flower in activity are not intentions. With sandwiches, as with Christmas, the thought may be what counts but only if it comes up with the goods. Charles Conti would make the philosophical point plain. Intentionality, he argues, requires a modus operandi: ‘we [do not] “act” without a body, nor “mean” without a mind. Intending depends as much on the means as on the motive’ (1995: 185). [6] That was the lesson of J. L. Austin’s ‘A Plea for Excuses’. We can and do take the ‘machinery of action’ apart, separating intentions, acts, and consequences (141). But the separation is logical or conceptual, not physical or metaphysical. We do it in order to assign and accept responsibility, to own our acts, and compel others to do likewise. That much is clear from the logic of the language of apology. [7]
Put simply, intending is something we do. It is not something that our brains do for us. The brain is a manifold of physical processes, not an agency capable of intending. Further, we do not, properly speaking, do anything with that manifold. The brain is not an operative organ as such; rather, Farrer observes, it is ‘an instrument of organ-control’ (1960: 28). The roots of intentions are deep in the processes of the brain, but intentions themselves extend throughout the body engaging the muscle and bone of lips and limbs. They come into focus – are actualised – at the operative point, where the agent strives to grasp word or world. In short, intentional consciousness is actualised in the talking mouth and the bread-buttering hand, not the electrochemically fizzing brain.
G. E. M. Anscombe and Stuart Hampshire were right to insist on identifying and explaining behaviour as a basic criterion for recognising intentionality. Hampshire, in particular, suggests that, for anything to count as intended, there must always be some possible answer to the question ‘What are you doing?’ (75). [8] Locate intentions solely in the brain, however, and this is no longer true. Evidently, at the moment I write this, brain processes are going on. But they are not what I am doing. Currently, I am trying to express an idea as clearly as I am able. Insist that what I am really doing is firing off electrochemical processes in my brain and moving the muscles in my arms and fingertips, however, and Hampshire is surely entitled to ask me how I am doing it. If I have no idea whatsoever of how I am doing something, then it is difficult to see why I would insist that I am, in fact, doing it. In principle, at least, it must always be possible to give some account of the procedures my activities involve if I am to claim them as mine.
The physical corollary of intention is not the brain process itself, but the large-scale pattern of bodily activity in which the intention is expressed. On the macroscopic level, there are the muscular extensions and contractions of bodily movement. On the microscopic level, Farrer describes ‘an immensely tenuous, elongated plant, rooted in several different regions of the brain, passing its stem through the spinal column, and flowering into performance in the hand’ (1960: 26). Crucially, it is not in the particular brain process that we find agents intending; the ‘whole nerve-plant from brain to hand is the vehicle or instrument of the behaviour’ (26). [9] This applies even when there is no explicit activity. Thinking or, perhaps more pertinently, prior intentions are actions.
More properly, thinking is the ‘shadow of doing’ and so ‘must be interpreted by a full-blooded doing’ (Farrer, 1960: 39). The analogy of bodily activity is the only clue we have; in this case, language-use: ‘[t]he best sort of characterisation of thinking is that it is a sort of talking to ourselves’ (Farrer, 1960: 29). Thus, the ‘shadowy’ action-patterns of thought are likely to be those of ordinary acts of talking: the activated ‘nerve-plant’ running from brain to lips, jaw, tongue, vocal cords and so on. [10] To be sure, when we think we (try, at least, to) talk silently to ourselves. The action-pattern is not fully enacted and the nerve-plant fails to flower in that ‘full-blooded doing’. Thinking ‘ghosts’ the act of speaking as it were, stopping short of engaging the vocal apparatus. This much seems clear, not least because, as academics know very well, silent thought so easily and so frequently crosses unnoticed the boundary between talking silently to oneself and doing so out loud.
This alters the course of social ontology. Understanding intentionality, Farrer insists, means recognising that the ‘characteristic act of mind is discourse’ (1964: 63). Discourse, of course, is originally a social act. It is not dependent on biological facts per se, but on social ones: my capacity to recognise and respond to other agents like myself. Indeed, ‘[t]hought is the interiorisation of dialogue. We should not think at all, were we not mutually aware’ (Farrer, 1967: 126). Acts of thought, such as prior intending, do not occur in my brain but in my mind. Like all activities, acts of mind are transacted with and, crucially, learned from, others. [11]
This overcomes the fundamental isolationism of Searle’s analysis. For Searle, that is, human beings are distinct individuals, discrete units of intentionality. ‘The only intentionality that can exist [he argues] is in the heads of individuals. There is no collective intentionality beyond what is in the head of each member of the collective’ (55). How, then, can we make sense of social institutions and the cooperative behaviour that constitutes them? If all intentionality is really ‘I-intentionality,’ then genuinely shared intentions, full participation in those institutions, seems impossible. Searle’s answer lies in the assumptions we make when engaged in group activities. In pursuing group goals, he argues, ‘I am operating on the assumption that you will do your part, and you are operating on the assumption that I will do my part’ (55). There is, then, no collective intentionality, only the ‘operating assumptions’ of individuals in groups. Exactly why Searle thinks this rebuts any reduction of ‘we-intentionality’ to ‘I-intentionality’ is unclear. Under his schema, cooperative action is just individual actions aggregated under the umbrella of shared assumptions. There is no participation in social institutions except on the individual level. Consequently, it would seem that there are no genuinely social institutions as such.
Contra Searle, collective activity reveals individual intentional acts to be intrinsically interconnected. Individual intentions are shaped by group intentions; they are active responses to the intentional acts of others in the pursuit of shared goals. This is a particular instance of a general truth about human thought and action: every intention demands participation. My intentional activities are, as Farrer put it, actualised in pari materia with other agents (1959: 235). They are primitively exercised and experienced as an ingredient in some interaction event.
Intentional action is controlled interference aimed at bringing about change in the agent’s environment. There is no action in vacuo; the simplest movements require the relatively stable presence of something in relation to which they are movements, minimally the rest of my body. As Farrer points out, however, I am not ‘swimming in a perfectly featureless medium’; there is considerably more to my environment than my own body. I am, in fact, ‘walking the earth among all sorts of obstacles’ (1959: 233). And it is those obstacles that instigate my actions: I act in response to other agencies insofar as they impact upon me, perhaps by impeding my progress or by providing the means to overcome some other impediment. If my intention is to flap my arms and fly to the moon, for example, then a whole universe of physical forces is against me. However, once I understand those forces and the ways in which they operate, I can manipulate them; I can (although, in my own case, admittedly only in principle) use the agencies they govern to build myself a rocket. [12] In this way, those agencies set the boundary conditions for action without, however, determining the limits of intention. Without them, I could not act at all. I could not walk without the ground beneath my feet providing friction or talk without the air I breathe and the other objects around me that reflect the sound. I could not even think; what would I think about? Nothing but the emptiest thoughts about thought itself and that can hardly be classed as thinking at all. Unable to act, neither could I intend, for one cannot intend what one is incapable of doing. Indeed, I would not know myself as an agent at all. [13] In isolation from my environment and its obstacles, what would I be? Not a walker, a talker, nor even a thinker; and almost certainly not a joker, a smoker, or a midnight toker (Curtis, Ertegün, and Miller, 1973). 
‘[A]part from my experience of impinging upon, and being impinged upon by, other things or forces, I have no conceivable clue to physical existence, or physical force, or physical interaction’ (Farrer, 1972b: 210).
A merely physical environment would not, of course, birth human consciousness. That takes a social one: ‘[m]entality as we know it is a social product’ (Farrer, 1967: 126). To be a person, I must be in a world of persons. The encounters that shape our lives, our identities, our capacities to think and act, are encounters with other personal agents. Our intentions are formulated within the framework of their intentional behaviour. That framework is the cradle of consciousness: it is everything our parents do that promotes our survival, that makes our development as human beings possible, and we are born into it. It is where we learn to identify sounds and objects, to manipulate our environment, and, ultimately, to participate in the most basic and most important of social institutions: the home. [14] Hence, Farrer reminds us: ‘From first infancy our elders loved us, played us, served us and talked us into knowing them’ (1967: 129); and in knowing them, becoming ourselves. Fortunately, that process does not end with childhood (something easily forgotten until we encounter another, strikingly different, culture). Throughout our lives, others teach us to think and act. It is through this process of intelligent imitation that we develop a self and learn to enact it. That much, J. L. Austin reminds us, is also clear from the logic of the language of apology, language primarily affirmative of others, our relations to them, and impacts upon them. [15] Such interchanges are the logically and psychologically basic facts of human existence. Human beings are social creatures first, individuals only second.
Consider the most obvious and typical act of consciousness: communication. In such transactions, consciousness develops; dialogue supplies the tools with which we make of ourselves what we are. ‘Personality is part of that common social and linguistic store we share with others’ (Conti, 1983: 74). Not only ‘share with,’ but also learn from. Others provide us with the intellectual artefacts, the language and learning, from which we construct our thoughts and actions. This is the ‘social lore’ (as Polanyi termed it) from which personality is constructed; it passes from generation to generation ‘by a process of communication which flows from adults to young people’ (1974: 207).
Conversation is, therefore, a prime example of co-operative behaviour. Logically speaking, my talk presupposes your involvement in the discourse, just as yours presupposes mine. But there is much more here than shared presuppositions. My intending is actualised in conjunction with the actualisation of yours: our intentions are necessarily, and intimately, interconnected. That is what it means to be a personal agent: ‘to perform an act so that others recognise the twin of their own intentions’ (Conti, 1983: 74). In acting, that is, we offer ‘a mirror image, showing others to themselves, or vice versa’ (Conti, 1983: 74-5).
In conversation, we may explicitly share intentions. Striving to make our meaning clear, to say what we mean, we enter into what John Macmurray calls ‘reciprocal communication with others;’ sharing our experiences with one another, we ‘constitute and participate in a common experience’ (1961: 60). In doing so, we simultaneously inform and in-form one another’s meaning. Actions express intentions and so solicit a response in kind; responses elicit further intentions, participating in their formation. Exploring or elaborating your point, I co-opt your intentions as you co-opt mine. Each response we give provides ingredients for the next. Though many academics will no doubt be surprised to learn it, conversation is as much about listening and understanding as it is about talking. Whether we agree or disagree, we appropriate one another’s intentions along with the ideas and experiences they express. In appropriating, we inevitably modify, interpreting and supplementing them, before returning them to the conversational pot. Moreover, since intelligent discourse tends to involve at least some logical progress, it consists not only in responding to what has been said, but also to what is likely to follow. Hence, in being anticipated to some degree, each response is also a key ingredient in those that precede it. Our intentions are in and as the interplay of these contributions; consequently, the meaning we generate – to reach a compromise, perhaps, or simply clarify a question – is genuinely shared. This mutual adjusting and modifying of shared intentions is characteristic of co-operative action in general. It is, furthermore, constitutive of human intentionality and of all the shared experiences and institutions it generates. [16] Ultimately, these are the logically basic and irreducible facts which Searle is yet to fully grasp. Indeed, this may be because they cannot be accommodated by ‘physics, chemistry, by evolutionary biology, and the other natural sciences’ (4).
Nevertheless, I suggest, those facts provide a significantly better starting point for understanding human intentionality and the social reality it creates. What is more, they provide significantly better conditions of adequacy to govern that enquiry.
Bibliography
Austin, J. L.
‘A Plea for Excuses’ in Philosophical Papers. Eds. J. O. Urmson & G. J. Warnock. Oxford: Clarendon, 1961. 123-152.
Conti, Charles
‘Austin Farrer and the Analogy of Other Minds’. For God and Clarity (New Essays in Honour of Austin Farrer). Eds. Jeffrey C. Eaton & Ann Loades. Pennsylvania: Pickwick, 1983. 51-91.
Piaget, J.
The Child’s Conception of the World. Translated by Joan and Andrew Tomlinson. St. Albans: Granada, 1982.
Polanyi, M.
Personal Knowledge. Chicago: University of Chicago, 1974.
Searle, J. R.
Making the Social World. New York: Oxford University Press, 2010.
Strawson, P. F.
Individuals. London: Methuen, 1959.
Vacariu, G.
‘The Mind-Body Problem Today’. Open Journal of Philosophy. 1.1 (2011): 26-34.
Notes
[1] All quotations from Searle are from here.
[2] Elsewhere, we are told, Searle locates intentions-in-action in the ‘components of actions,’ specifically ‘the limb movements’ which constitute physical action (McDowell, 2013: 4). Despite this, as we shall see, it appears that Searle has not appreciated the significance of that location.
[3] See also Hampshire, 1983: 79.
[4] Farrer was paraphrasing A. J. Ayer. Descartes’ problem remains unsolved; hence, Farrer concedes: ‘If...we know nothing of the link between matter and mind, it is because there is nothing to be known. Anyone who has reflected on what the expression of a mental state or act means, will see that there can be no further link between them, than...that of de facto concurrence’ (1960: 8). For the parallel ‘binding problem’ in neuroscience, see Vacariu 2011: 28.
[5] See Austin 1961: 126 on the dangers of talking about action while forgetting the personal agency involved.
[6] Echoing the sentiments of J. R. Lucas, Conti reminded his students that, when the intention to write an essay fails to flower into activity, we have no grounds for claiming that an essay was ever intended. See also McDowell, 2013: 3.
[7] See Farrer, 1960: 48 and 1967: 114; and Conti, 1995: 187. Cf. Searle’s causal interpretation of intentionality (2010: 133).
[8] Anscombe’s question was ‘why?’ (1976: 9); Hampshire opted for the more basic ‘what?’ (1983: 93). Explaining why one does something presupposes an awareness and understanding of what one is doing. Nota bene: it is not necessary that one must be able to specify the fine detail of one’s actions. As Michael Polanyi points out, knowing how to ride a bicycle does not require detailed knowledge of the distribution of physical forces involved (1974: 49-50).
[9] According to Farrer, the ‘action-plant’ describes ‘[a]n area picked out by the action of consciousness, not by neural action as such’ (1960: 54). The patterns of neurophysical operation are a continuous rhythm: ‘a minute excitation, constantly weaving its natural channels.’ On the microscopic level, these ‘minute excitations’ have no intrinsic unity: ‘Seen from the level of neurophysical functioning, none of these patterns is a single whole.’ On the microscopic level we can only discern elements of a larger pattern. Thus, neurophysical description identifies those smallest components of an action-pattern. ‘Neurophysiology uses a microscopic scale; its unit of time is a moment which allows room for no more than the excitation of this part of this nerve, or that part of that. The whole system of movement is pulled together into one in being wielded by a single act of consciousness’ (1960: 54).
[10] See also Kosslyn’s use of PET (Positron Emission Tomography) scans to show that the areas of the brain involved in mentally picturing an object and the visual perception of an object are the same in Image and Brain: The Resolution of the Imagery Debate, 1993.
[11] See Polanyi, 1974: 203-214.
[12] Hence, both Farrer and Hampshire would designate touch as the basic sense, because it means access to a world of interacting agencies. See Farrer, 1959: 232 and Hampshire 1983: 48.
[13] See Hampshire 1983: 47-53 and Farrer, 1959: 230-237 on the role of action in self-identification.
[14] For a more detailed discussion of the personal – as opposed to the organic – foundations of consciousness, see Macmurray ch. 2 ‘Mother and Child,’ 1961.
[15] See also Farrer, 1970: 74: ‘We learnt to talk, because they talked to us; and to like, because they smiled at us. Because we could first talk, we can now think; that is, we can talk silently to the images of the absent, or we can pretend to be our own twin, and talk to ourself.’
[16] See Farrer, 1960: 300 and the postscript to the 2nd edition of Hampshire, 1983: 274-296.