Rejecting The Systems Reply to John Searle’s Chinese Room Argument

The ‘systems reply’ to John Searle’s Chinese room thought experiment argues that even though the individual inside the Chinese room does not understand Chinese, he is part of a larger system that does. Searle considers the systems reply and responds to it in Minds, Brains, and Programs, but several philosophers still reject Searle’s conclusions for reasons along the lines of the systems reply. I will begin this essay by defining strong AI and providing an exposition of the Chinese room analogy and its implications for strong AI. I will then briefly characterise the systems reply to the Chinese room analogy, as well as Searle’s response to it. Next, I will consider Jack Copeland’s ‘Outdoor Version’ of the Chinese room argument, which incorporates Searle’s response to the systems reply, with the intention of showing that it is sound. Finally, I will give reasons to doubt that a ‘system’ of the kind imagined by advocates of the systems reply could ever possess understanding or any other such intentional1 mental states. These arguments yield the conclusion that the systems reply fails to undermine the Chinese room argument.

The Chinese room is directed against the claims of strong AI. Advocates of strong AI claim, for example, that “the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.” (Searle, 1982: p353) Weak AI, in contrast, maintains that computers can only act as though they have a mind and that they are only able to imitate understanding and other intentional mental states. Searle’s example of a Schank program, a program designed to simulate the human ability to understand stories by answering questions about them, is used to illustrate the kinds of claims advocates of strong AI make. “Partisans of strong AI claim that in this question and answer sequence the machine is not only simulating a human ability but also (1) that the machine can literally be said to understand the story and provide the answers to the questions and (2) that what the machine and its program do explains the human ability to understand the story and answer questions about it.” (Searle, 1982: p354) These two claims – particularly the latter – may somewhat overstate the strong AI case. A strong AI advocate could well dispute whether the Schank program really understands stories, much less whether it explains the human ability to understand them. What all strong AI advocates would surely agree on, though, is that there is nothing wrong, in principle, with the idea of a program that understands a story as a human does, even if they disagree over whether the Schank program is such an example. A more moderate statement of the strong AI position, to avoid directing the argument at a straw man, is that “mental processes are computational processes over formally defined elements.” (Searle, 1982: p366) What this means is that thoughts and other intentional mental states can be reduced to nothing more than symbols and rules for putting those symbols together.
Each thought can thus be represented as a string of symbols, and so when a computer manipulates symbols in the appropriate way it too can be said to think. In creating a conscious machine, strong AI emphasises the importance of this kind of symbol manipulation (in other words, the program or software) over the hardware, so an important feature of the position is that it does not matter what kind of hardware the ‘thought’ is realised on. As long as the appropriate computation is carried out, whatever is doing the computing can be said to think. Searle is not denying that machines in general can think since, after all, he says, “[w]e are precisely such machines”. (Searle, 1982: p368) Nor does the strong AI thesis define artificial intelligence as “whatever artificially produces and explains cognition” (ibid.), which would make it trivially true. The claim Searle’s argument is directed against is that something “could think, understand, and so on solely in virtue of being a computer with the right sort of program” and that “instantiating a program, the right program of course, [could] by itself be a sufficient condition of understanding”. (ibid.) Searle wants to argue that, as well as the appropriate program, consciousness requires the appropriate hardware in order to be realised.

The Chinese room is supposed to refute strong AI by describing a situation in which a formally defined program answers questions, much like the Schank program, in a manner that passes the Turing test2 but where the program cannot be said to understand what it is doing, contrary to what strong AI advocates must claim3. The Chinese room analogy is as follows: imagine an English-speaking man, locked in a room, who does not understand a word of Chinese, to the point where he is “not even confident that [he] could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles.” (Searle, 1982: p355) What the man does have, however, is a set of instructions (in English) for how to correlate certain symbols and collections of symbols with other sets of symbols. This would be called the ‘program’. If native Chinese speakers were to pass a written story, say, under the door of the Chinese room along with a question about it, the man inside could run the program and pass back under the door a response in Chinese. We are to imagine that the man inside is extremely efficient at doing this, so that he takes as long to respond as we might imagine a native speaker to, and also that the program is so well written that the responses are indistinguishable from those of a native Chinese speaker. To those outside the room the man’s understanding of Chinese appears no better than would his understanding of English – yet there is a clear difference between the two cases. In the Chinese case, all the man in the room has are the formal syntactic rules for manipulating symbols. In the English case, however, the man uses his understanding of the semantics of the English language to interpret what the symbols mean and generate his reply. No matter what the formal program consisted in, we imagine, the man inside the room would never understand Chinese as he does English, because Chinese, for him, lacks any semantic content.
Returning to the Schank program, it is clear that such a program is analogous to the man’s handling of Chinese, because all computers (currently) have to go on are formal symbols and rules. The conclusion Searle draws from these examples is that genuine understanding and other intentional mental processes consist of more than simply carrying out a program which involves only the manipulation of formal symbols.
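The purely syntactic character of the imagined program can be sketched as a simple lookup over formal rules. A toy illustration in Python, where the rule table is invented purely for illustration and stands in for the vast rule book Searle describes:

```python
# Toy sketch of the Chinese room's "program": Clerk matches an input
# string of symbols against formal rules and emits the paired output,
# with no access to what any symbol means. The rule table below is
# invented for illustration only.
RULES = {
    "今天天气好吗": "今天天气很好",  # a canned question/answer pair
    "你叫什么名字": "我没有名字",
}

def chinese_room(input_symbols: str) -> str:
    """Return the output the rules dictate, by pattern matching alone."""
    # Default reply ("please say that again") when no rule matches.
    return RULES.get(input_symbols, "请再说一遍")

print(chinese_room("今天天气好吗"))  # prints 今天天气很好
```

The point of the analogy is that nothing in this lookup, however large the table grows, ever involves the meaning of the symbols.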

The systems reply to the Chinese room argument rejects Searle’s conclusion. The systems reply accepts that the individual carrying out the program may not understand Chinese, but maintains that the system as a whole understands Chinese: “the fact is that he is merely a part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has ‘data banks’ of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part.” (Searle, 1982: p358) This is a worry for Searle’s argument as it does not necessarily follow from the fact that the man does not understand Chinese that the system as a whole does not understand Chinese. Jack Copeland illustrates this problem with the following example:

“The flaw in the vanilla version[4] is simple: the argument is not logically valid… The proposition that the formal symbol manipulation carried out by Clerk[5] does not enable Clerk to understand the Chinese story by no means entails the quite different proposition that the formal symbol manipulation carried out by Clerk does not enable the room to understand the Chinese story. One might as well claim that the statement ‘The organization of which Clerk is a part has no taxable assets in Japan’ follows logically from the statement ‘Clerk has no taxable assets in Japan’.”

(Copeland, 2002: p110)

This argument by no means settles the truth of whether the room as a whole understands Chinese, as Copeland points out. However, what this argument does show is that, even if Searle has successfully established that the man inside the room (Clerk) does not understand Chinese, he is not entitled, a priori, to the further claim that the room, or system, does not understand Chinese. This wider claim does not logically follow from Searle’s argument. Despite this, there are good independent reasons to believe that the system taken as a whole does not understand Chinese, and these will be examined later in the essay. For now, however, I wish to draw attention to Copeland’s second formulation of Searle’s argument, which is logically valid.
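Copeland’s invalidity point can be illustrated with a toy countermodel: nothing in the vanilla premises rules out an interpretation in which Clerk fails to understand while the room succeeds. A minimal sketch in Lean 4, with illustrative names of my own (not Copeland’s notation):

```lean
-- Toy countermodel: interpret the domain as Bool and "understands" as
-- being `true`; Clerk does not understand, yet the room does, so
-- ¬Understands(Clerk) cannot by itself entail ¬Understands(room).
abbrev Entity := Bool

def Understands (e : Entity) : Prop := e = true
def clerk : Entity := false
def room : Entity := true

example : ¬ Understands clerk ∧ Understands room :=
  ⟨fun h => Bool.noConfusion h, rfl⟩
```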

Searle’s counter reply to the systems reply is to re-imagine the original thought experiment, this time with all the additional features (the ledger and rules, the data banks, etc.) internalised by the man in the room, who commits them to memory. He performs all the calculations mentally, and we can even do away with the room and imagine him interacting with native Chinese speakers by following the formal rules he has now memorised. In this new scenario the system is a part of him, yet he still does not understand Chinese. This is supposed to block the systems reply by making the entire system a part of the man in the room. In his second formulation of Searle’s Chinese room argument, Copeland takes this counter into account and uses it as an additional premise to create a logically valid argument:

(2.1) The system is a part of Clerk

(2.2) If Clerk (in general, x) does not understand the Chinese story (in general, does not ɸ), then no part of Clerk (x) understands the Chinese story (ɸs).

(2.3) The formal symbol manipulation carried out by Clerk does not enable Clerk to understand the Chinese story.

(2.4) The formal symbol manipulation carried out by Clerk does not enable the system to understand the Chinese story.

(Copeland, 2002: p111)

Unlike the previous argument, this version (the ‘Outdoor Version’) is logically valid. Premise 2.1 is uncontroversial, as Searle’s response to the systems reply explicitly states that the system is internalised by, and thus a part of, the man in the room. Whether it is actually physically possible for a human to memorise such an extensive list of rules is unimportant, as the argument is only intended as a thought experiment (the actual possibility of a man performing the appropriate calculations and finding the correct sets of symbols in any reasonable amount of time seems equally dubious). As long as there is nothing wrong, in principle, with memorising all the salient features of the program described by Searle, premise 2.1 should not be disputed. Whether premise 2.3 should be accepted is not quite as clear, however. It may be argued that, no matter how detailed the program may be, there are still certain kinds of questions (e.g. those involving indexicals) that require semantic understanding in order to be answered in a way that would convince outsiders that the man carrying out the program really does understand Chinese. If this is true, then either Clerk’s responses would not be indistinguishable from a native Chinese speaker’s, or Clerk needs at least some semantic, and not purely syntactic, understanding of Chinese to properly convince natives he is fluent. Due to space restrictions, however, I will not explore this question in depth here. Given the potentially unlimited set of rules the program has available, though, it seems at least possible that there could be some formal way of providing Clerk with answers to such questions. Clerk would then still have zero understanding of Chinese semantics whilst still appearing from the outside to be a fluent speaker.
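The validity of the Outdoor Version can be made fully explicit by rendering it schematically. Here is a minimal sketch in Lean 4, where the predicate and variable names are placeholders of my own, not Copeland’s notation:

```lean
-- Schematic rendering of Copeland's Outdoor Version.
-- `PartOf` and `Understands` are placeholder predicates.
theorem outdoor_version {Entity : Type}
    (PartOf : Entity → Entity → Prop)
    (Understands : Entity → Prop)
    (clerk system : Entity)
    -- (2.1) the system is a part of Clerk
    (h1 : PartOf system clerk)
    -- (2.2) the Part-Of principle: if x does not understand,
    --       no part of x understands
    (h2 : ∀ x y, PartOf y x → ¬ Understands x → ¬ Understands y)
    -- (2.3) Clerk does not understand the Chinese story
    (h3 : ¬ Understands clerk) :
    -- (2.4) therefore the system does not understand the Chinese story
    ¬ Understands system :=
  h2 clerk system h1 h3
```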

Perhaps the most controversial premise of the preceding argument is premise 2.2, which Copeland calls the ‘Part-Of principle’. Copeland disputes it, doubting that Clerk could converse fluently in Chinese without something in him, on some level, understanding Chinese semantics, even if Clerk himself would insist he has no understanding at all. “It is all too conceivable”, he says, “that a homunculus or homuncular system in Clerk’s head should be able to understand Chinese without Clerk being able to do so.” (Copeland, 2002: p112) To support this claim he gives the following example of a homuncular system we might expect to find in humans: “Conceivably there is a special-purpose module in Clerk’s brain that produces solutions to certain tensor equations, yet Clerk himself may sincerely deny that he can solve tensor equations – does not even know what a tensor equation is, we may suppose. Perhaps it is the functioning of this module that accounts for our ability to catch cricket balls and other moving objects.” (ibid.) Even if this particular example turned out to be false, it is certainly highly plausible that humans have sub-systems of this kind, evolved for various practical purposes. If it turned out that Clerk did in fact have such a sub-system that understood Chinese, it would undermine Searle’s argument for the following reasons:

“Of course, one might respond to these and similar examples as follows. Since a part of Clerk is proving a theorem of quantified tense logic (solving a set of tensor equations, etc.) then so is Clerk – there he is doing it, albeit to his own surprise. This response cannot be available to Searle. If Clerk’s sincere denial that he is able to solve tensor equations (or what have you) counts for nothing, then likewise in the case of the Chinese Room. However, it is a cornerstone of Searle’s overall case that Clerk’s sincere report ‘I don’t speak a word of Chinese’… suffices for the truth of premise (2.3)”

(Copeland, 2002: p112)

If a part of the system (a homuncular module, say) did turn out to understand Chinese, it would be sufficient to refute both premises 2.2 and 2.3 of the Outdoor Version, and the systems reply would succeed in undermining Searle’s argument by showing that, contrary to what he says, Clerk does in fact understand Chinese. The analogy would thus fail to show the relevant distinction between a human who genuinely understands a story and a computer that only appears to do so by using formal rules.

Though the possibility of homuncular modules in human brains is in general plausible, the possibility of a module that understands a language, independently of Clerk being able to understand that language, seems less so. Indeed, the possibility of any kind of sub-system understanding anything seems highly doubtful. Returning to the example of tensor equations, it may well be true that the ‘special-purpose’ module produces solutions to tensor equations, but it is a stretch to further claim that the module understands these solutions as solutions when this happens. A more likely explanation is that these subsystems just carry out their actions blindly, perhaps even algorithmically, though to some degree they may involve computation. To admit that these subsystems understand what they are doing would allow all kinds of blind computations to be labelled ‘understanding’. As Searle himself says, “there is a level of description at which my stomach does information processing, and it instantiates any number of computer programs, but I take it we do not want to say that it has any understanding.” (Searle, 1982: p360) However, if we are to accept that the ‘special-purpose’ module which solves tensor equations understands those equations, then it seems we must allow that the “stomach, heart, liver, and so on are all understanding subsystems, since there is no principled way to distinguish the motivation for saying the Chinese subsystem understands from saying that the stomach understands.” (ibid.) ‘Understanding’, in general, seems to be restricted to conscious mental states. There are some borderline cases, for example when someone knows a phone number despite being unable to consciously recall the digits out loud, because they can still type the correct number into the phone when they “let [their] fingers do the walking.” (Harnad, 2002: p302) The Chinese room does not appear to be such a case, however.
“Finding oneself able to exchange inscrutable letters for a lifetime with a pen-pal in this way would be rather more like sleep-walking, or speaking in tongues… It’s definitely not what we mean by ‘understanding a language’, which surely means conscious understanding.” (ibid.) In spite of this intuition, Copeland makes the mistaken assumption that being able to solve tensor equations, say, is equivalent to understanding them, which leads him to the following claim: “Since a part of Clerk is proving a theorem of quantified tense logic… then so is Clerk… This response cannot be available to Searle.” In light of the preceding argument, however, it seems this response is, contrary to Copeland’s claims, available to Searle. Searle can accept that Clerk is solving tensor equations if a part of him is, because being able to solve tensor equations does not necessarily mean being able to understand those equations and their solutions. The claim Searle will have to deny is rather: “If a part of Clerk can solve tensor equations then Clerk understands tensor equations.” Given the preceding discussion, though, this claim seems implausible anyway. That Clerk can give the correct Chinese responses does not mean that he understands what he is saying, on any level, for the same reason that being able to catch a cricket ball does not mean one understands tensor equations and their solutions.

If one accepts that Clerk does not understand any Chinese (premise 2.3) then, in light of the previous discussion of the Part-Of principle (premise 2.2) and the stipulation that the entire system is internalised by Clerk (premise 2.1), one must also accept the conclusion (2.4) that “The formal symbol manipulation carried out by Clerk does not enable the system to understand the Chinese story.” Thus, the systems reply fails. There are, moreover, further reasons why the systems reply fails even in principle. To repeat the argument, the systems reply maintains that although Clerk himself may not understand Chinese, the combination of Clerk and all the extra features of the thought experiment (the ledger, rules, scratch paper for calculations, data banks of Chinese symbols, etc.) does understand Chinese. This claim raises a number of questions: How does the addition of these features allow for understanding? Are they all necessary and, if not, which ones are needed for the system to understand? Where does the semantic content come from? This last question will be dealt with later but, for now, the first two raise concerns for advocates of the systems reply. To admit that the system understands Chinese, but not Clerk, implies implausible claims about the kinds of things that are sufficient to cause understanding and other intentional mental states. Somewhere in his explanation of the Chinese room case the proponent of the systems reply must draw a non-arbitrary line between when the system understands and when it does not, but no additional feature or set of features above and beyond Clerk himself appears capable of adding understanding to the system.
“If the person alone doesn’t understand Chinese, no amount of adding these kinds of things will turn the resulting conglomeration into something which does so… A person plus some pieces of paper and walls just isn’t the right kind of thing to be a properly basic subject of mental phenomena.” (Preston, 2002: p30) As a result of the Chinese room argument Searle concludes that “only something that has the same causal powers as brains can have intentionality” (Searle, 1982: p369) and that “[w]hatever else intentionality is, it is a biological phenomenon, and it is likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena.” (Searle, 1982: p372) Though this may overstate the case somewhat, it is hard to see how the kinds of things that constitute the system in the Chinese room argument could realistically be said to understand. Perhaps it would be possible for these kinds of things to simulate the relevant brain activity and thus understand Chinese, as in the Brain Simulator reply, but this is a different argument altogether. In the original Chinese room case it seems that, if the human does not understand, then there is no way the system as a whole could plausibly be said to understand either. The inclusion of the kinds of features described in the systems reply is simply not sufficient to cause intentionality.

A related problem with the systems reply is that it fails to deal with perhaps the most crucial point of the Chinese room analogy: the complete lack of semantic content. Just as one might doubt that the inclusion of the ledger, rules, and so on could be sufficient to constitute intentionality, one may also question how the addition of these things could provide the system with semantic content when it is lacking in the individual. Searle repeatedly emphasises that Clerk understands nothing of Chinese. All he has available are the syntactic rules for how to correlate meaningless (for him) symbols with other meaningless symbols. As his understanding of Chinese is zero, we cannot even imagine him deducing the meaning of unknown symbols from ones he already understands, for he has absolutely no foundation from which to begin.

“The Systems Reply claims that even though there is no semantic content in me alone, there is semantic content somewhere else – in the whole system of which I am a part, or in some subsystem within me. But the same argument that worked originally – namely, I do not have any understanding of Chinese because I do not know what any of the words mean, I have no way to attach meaning to any of the symbols – works for the whole system. The whole system doesn’t know what any of the words mean either, because it has no way to attach any mental content to any of the symbols.”

(Searle, 2002: p53)

The same reasons that undermine the claim that the system understands Chinese also undermine the plausibility of the system, but not the individual, having semantic content. If Clerk does not provide the symbols with any interpretation (semantics) then the extra things that constitute the system as a whole are surely not going to either. The rules that instruct Clerk to pass x back when y comes under the door are purely syntactic, and scratch paper, walls, pencils, etc. are hardly sufficient to imbue dead signs with meaning, just as they cannot be said to understand. These related problems for the systems reply highlight why it must, in principle, fail. It just isn’t possible for the extra inanimate things included in the systems reply to add understanding or meaning to the Chinese room. This is not to say that paper, walls, and so on can’t, in principle, be configured in some way so as to possess these features6. However, the minimal purpose of these extra items in the Chinese room case as originally conceived by Searle (to record syntactic rules for use by Clerk, to prevent outsiders seeing what is happening inside the room, and so on) is not sufficient to add features which are exclusive to humans and certain animals (or, possibly, things which simulate the relevant features of humans). It is hard to imagine how any amount of syntax (plus scratch paper, walls, and so on) could ever be sufficient to generate semantic content in the Chinese room.

In summary, the systems reply is not a successful strategy for refuting Searle’s Chinese room argument and its conclusions. Although it does not logically follow from the original version of Searle’s thought experiment that the system as a whole does not understand Chinese, a logically valid version of the Chinese room argument can be formed by taking Searle’s counter to the systems reply into account and using it as a premise. I gave reasons to think each premise of this subsequent argument is true – particularly premise 2.2, the Part-Of principle. Copeland rejects the Part-Of principle on the grounds that homuncular subsystems are capable of solving problems the individual may not be capable of solving. As Copeland believes this premise to be the weakest of the argument and undermined by this claim, he issues the following challenge to Searle: “If… the Part-Of principle is said to be a purely contingent claim (i.e. a claim that happens to be true in the actual world but is not true in possible alternatives to the actual world), then Searle’s difficulty is to produce reasons for thinking the principle true.” (Copeland, 2002: p113) There are, however, contrary to Copeland’s claims, good reasons to think the Part-Of principle is true. Copeland’s example of a homuncular system that solves tensor equations is not an example of an understanding subsystem, and the possibility of such subsystems, where the understanding is independent of the individual they are a part of, seems problematic in general. If we allow that Copeland’s example subsystem really understands tensor equations then this seems to pave the way for many apparently non-understanding systems to be labelled understanding. “The study of the mind”, writes Searle, “starts with such facts as that humans have beliefs [and other intentional mental states such as understanding], while thermostats, telephones, and adding machines don’t. If you get a theory that denies this point you have produced a counterexample to the theory and the theory is false.” (Searle, 1982: p361) If we accept Copeland’s claims then we seem to allow all information processing to be labelled understanding, which provides numerous counterexamples to this most basic of facts in the study of the mind. The Part-Of principle, when combined with the other premises discussed, logically entails the conclusion that the system as a whole does not understand any more than Clerk does. A further reason that the system in general will not understand Chinese if Clerk does not is that the extra features of the system that are lacking in Clerk do not provide adequate resources to supply either semantic content or understanding to the system. Understanding requires, at the very least, a simulation of the relevant features of brains, but no such simulation can happen in the Chinese room case, as the only purpose of the additional features is to provide syntactic rules for Clerk to follow. These syntactic rules are not sufficient to generate semantic content in a vacuum. Though the systems reply ultimately fails for these reasons, this does not necessarily mean that Searle’s conclusions drawn from the Chinese room thought experiment are true. What this essay has shown, however, is that Searle’s conclusions are not undermined by worries along the lines of the systems reply.


1 Intentionality: “that feature of certain mental states by which they are directed at or about objects and states of affairs in the world. Thus, beliefs, desires, and intentions are intentional states; undirected forms of anxiety and depression are not.” (Searle, 1982: p358 footnote)

2 Turing Test: A test to determine whether a machine is intelligent. If, in an ordinary text conversation with someone, a computer program’s responses are indistinguishable from those of an actual human being then the computer passes the test.

3 Input is transformed by ‘computational processes over formally defined elements’ into output indistinguishable from a human’s. Thus this example fits the definition of a mental process given by strong AI earlier.

4 Vanilla version: The Chinese room argument as originally presented by Searle and as explained earlier in this paper.

5 Clerk: the name Copeland gives to the man in the Chinese room.

6 I have in mind the kinds of counterexamples where inanimate objects are configured in such a way as to correspond to synapses in the brain. E.g. Searle’s water pipes (Searle, 1982: pp.363-4) or Ned Block’s China brain (Block, 1991).


Block, Ned. (1991) Troubles with Functionalism. Reprinted in The Nature of Mind (pp.211-228) edited by David M. Rosenthal. Oxford University Press.

Copeland, B. Jack. (2002) The Chinese Room from a Logical Point of View. Reprinted in Views into the Chinese Room (pp.109-122) edited by John Preston and Mark Bishop. Oxford University Press.

Harnad, Stevan. (2002) Minds, Machines, and Searle 2. In Views into the Chinese Room (pp.294-307) edited by John Preston and Mark Bishop. Oxford University Press.

Preston, John. (2002) Introduction. In Views into the Chinese Room (pp.1-50) edited by John Preston and Mark Bishop. Oxford University Press.

Searle, John R. (1982) Minds, Brains, and Programs. Reprinted in The Mind’s I (pp.353-373) edited by Douglas R. Hofstadter and Daniel C. Dennett. Penguin Books Ltd.

Searle, John R. (2002) Twenty-One Years in the Chinese Room. In Views into the Chinese Room (pp.51-69) edited by John Preston and Mark Bishop. Oxford University Press.


  • There are two issues with this argument: first, the a priori assumption that whether an individual has a conscious understanding of something can only have one determinate truth value; and second, the slow collapse of the assumptions that justified the thought experiment.

    In the Chinese room example, we accept the idea that someone might be able to produce a result he does not understand, because we can associate it with many situations we face in our own life when we participate in bureaucratic systems we do not fully understand. We accept the black box nature of the setup, with its restricted information channels, and it seems plausible to us that an arrangement could be constructed where someone could perform incredibly complex lookup algorithms and formal operations without becoming aware of the semantic content of the symbols he is manipulating.

    But as we remove the forms of control, this becomes increasingly implausible. Already in the original example, we can imagine the man starting to glean patterns from the things he is writing over and over, almost against his will gaining some impression of the structures and patterns of the language he is writing. I’m reminded of the film Arrival, but generally speaking, anyone who has actually done a bureaucratic job should be able to remember some moment when the system started making sense to them, when they began to see some underlying logic in it.

    The human tendency to find meaning in the world provides a constraint that we willingly suppress when given a highly ordered environment, one that may serve to mislead the clerk as to the context behind his actions.

    But when he is out in the world, having memorised all this language, and performing it with others, it is not plausible that he will remain in total incomprehension of what is occurring; meaning will leak.

    And the next question is: from what to what? We claim that we know that the man cannot be conscious of the content of the statements, but this presumes the very thing that is to be inquired about: whether someone can implement consciousness without being aware of it.

    Examples of “subroutines” or “the room” seek to go larger or smaller, to solve this problem by locating the two systems in different spatial extents. But why can we not go further, and locate them in exactly the same place? Why do we assume that the human brain and body, which implement one consciousness, can only implement one at a time? It is on this assumption that the claim rests that he is “not conscious” of the meaning of the statements. Of course, we can locate these systems temporally; we can observe that, practically speaking, implementing an AI within your own mind by memorising lookup tables would be a profoundly slow process, and so we might assume that the Chinese-speaking AI and the conventional human consciousness operate on different timescales. But it should only be a small leap of imagination to consider the possibility of someone having this dual self, given that we have already allowed him the capacity for extremely unusual levels of mental acrobatics in order to hold a room’s worth of data in his head.

    If consciousness is recognised from within itself, how could one become aware of other consciousnesses occupying the same body? In cases where a person’s corpus callosum has been severed, dual consciousnesses in some kind of implicit, indirect interaction have been one of the proposals, and the evidence for this kind of dual consciousness exists on exactly the levels the Chinese room relies on: information presented in ways that allow apparent recognition by one side of the body but not the other, and a sense that you can talk to one version of a person without talking to the other.

    Once you recognise this as a possibility, the very idea of claiming to know that a man does not understand something, is not conscious of it, becomes subject to an important proviso: “as far as the consciousness I am aware of and communicate with is concerned”.

    If a man can be both aware and unaware of phenomena, meanings, and so on, then the contradiction between being aware of something and not being aware of it applies only within the domain of consistency corresponding to a single consciousness, or indeed a consciousness at a specific time, since over time we naturally cease to be aware of things we were previously aware of.

    Thus saying that we know this man’s most familiar and identifiable consciousness is not aware of something is insufficient to show that he is generally unaware of it while also implementing a system designed to let him generate a new consciousness. When both operate within the same brain, with access to the same information, and with introspection of one onto the formal mental processes of the other a constant risk, it is too much to say that they must necessarily remain separate.

    • Thanks so much for your reply.

      I wrote this a long time ago so I’m a little rusty, but my understanding of what you are saying is that ‘understanding’ is not this perfect binary where you either understand something and are fully conscious of it or you don’t understand it at all.

      Your example of severing the corpus callosum shows this: There can be areas of our minds we are not conscious of, but that seemingly possess understanding. And even without such an extreme example, we see something similar happening in our own minds all the time. It seems that a lot of thinking happens ‘in the dark’ so to speak – like when you’re trying to remember a name or phone number or something and it seemingly just bubbles up out of nowhere into conscious thought.

      But I’m still not sure such examples show that the system (i.e. man + instructions for manipulating Chinese symbols) understands Chinese. The parallel, as I see it, is this: The man and the instructions for manipulating symbols represent the conscious mind and the unconscious/subconscious aspects of the mind respectively. The first two things (the man in the room and a conscious mind) are analogous. But my intuition is that there is an important difference between the second two things (i.e. a subconscious mind and an inanimate list of instructions on a piece of paper). However, I can’t say exactly what this difference is.
