Knowledge
===Strong AI===
 
[[John Searle|Searle]] identified a [[philosophical position]] he calls "strong AI":
 
<blockquote>The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.{{efn|name="Strong AI"|This version is from Searle's ''Mind, Language and Society''{{sfn|Searle|1999|p={{Page needed|date=February 2012}}}} and is also quoted in [[Daniel Dennett]]'s ''[[Consciousness Explained]]''.{{sfn|Dennett|1991|p=435}} Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."{{sfn|Searle|1980|p=1}} Strong AI is defined similarly by [[Stuart J. Russell|Stuart Russell]] and [[Peter Norvig]]: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."{{sfn|Russell|Norvig|2003|p=947}}}}</blockquote>
 
The definition hinges on the distinction between ''simulating'' a mind and ''actually having'' a mind. Searle writes that "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind."{{sfn|Searle|2009|p=1}}
 
 
The position is implicit in some of the statements of early AI researchers and analysts. For example, in 1955, AI founder [[Herbert A. Simon]] declared that "there are now in the world machines that think, that learn and create"<ref>Quoted in {{Harvnb|Russell|Norvig|2003|p=21}}.</ref>{{efn|Simon, together with [[Allen Newell]] and [[Cliff Shaw]], had just completed the first "AI" program, the [[Logic Theorist]].}} and claimed that they had "solved the venerable [[mind–body problem]], explaining how a system composed of matter can have the properties of [[mind]]."<ref>Quoted in {{Harvnb|Crevier|1993|p=46}} and {{Harvnb|Russell|Norvig|2003|p=17}}.</ref> [[John Haugeland]] wrote that "AI wants only the genuine article: ''machines with minds'', in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, ''computers ourselves''."{{sfn|Haugeland|1985|p=2|ps=(Italics his).}}
 
 
Searle also ascribes the following positions to advocates of strong AI:
 
* AI systems can be used to explain the mind;{{efn|name="Strong AI in Searle (1980)"}}
 
* The study of the brain is irrelevant to the study of the mind;{{efn|Searle believes that "strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter." {{sfn|Searle|1980|p=13}} He writes elsewhere, "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works." {{sfn|Searle|1980|p=8}} This position owes its phrasing to Stevan Harnad.{{sfn|Harnad|2001}}}} and
 
* The [[Turing test]] is adequate for establishing the existence of mental states.{{efn|"One of the points at issue," writes Searle, "is the adequacy of the Turing test."{{sfn|Searle|1980|p=6}}}}
 
 
===Strong AI as computationalism or functionalism===
 
In more recent presentations of the Chinese room argument, Searle has identified "strong AI" as "computer [[functionalism (philosophy of mind)|functionalism]]" (a term he attributes to [[Daniel Dennett]]).{{sfn|Searle|1992|p=44}}{{sfn|Searle|2004|p=45}} Functionalism is a position in modern [[philosophy of mind]] that holds that we can define mental phenomena (such as beliefs, desires, and perceptions) by describing their functions in relation to each other and to the outside world. Because a computer program can accurately represent functional relationships as relationships between symbols, a computer can have mental phenomena if it runs the right program, according to functionalism.
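To make the functionalist claim concrete, here is a minimal sketch (the three states, the events, and the transition table are invented for illustration; they are not drawn from Searle or Dennett). The point is only that a state is specified by its role in the table, so any hardware that realizes the same table is, on this view, in the same computational state.

<pre>
# A toy functional specification: "mental" states defined only by their causal
# role (what input takes which state to which state, with what output).
# For the functionalist, any system that realizes this same table is in the
# same computational state, whatever its hardware. All names are illustrative.

TRANSITIONS = {
    # (current_state, event)       -> (next_state, output)
    ("neutral",       "sees_rain"): ("believes_rain", "none"),
    ("believes_rain", "goes_out"):  ("believes_rain", "takes_umbrella"),
    ("believes_rain", "sees_sun"):  ("neutral",       "none"),
}

def step(state, event):
    """Advance the machine; the table alone fixes what happens."""
    return TRANSITIONS.get((state, event), (state, "none"))

state = "neutral"
for event in ["sees_rain", "goes_out"]:
    state, output = step(state, event)
    print(event, "->", state, "/", output)
</pre>

Searle's argument is precisely that satisfying such a table, however elaborate, is not by itself understanding.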
 
 
[[Stevan Harnad]] argues that Searle's depictions of strong AI can be reformulated as "recognizable tenets of ''computationalism'', a position (unlike "strong AI") that is actually held by many thinkers, and hence one worth refuting."{{sfn|Harnad|2001|loc=p. 3 (Italics his)}} [[Computationalism]]{{efn|[[Computationalism]] is associated with [[Jerry Fodor]] and [[Hilary Putnam]],{{sfn|Horst|2005|p=1}} and is held by [[Allen Newell]],{{sfn|Harnad|2001}} [[Zenon Pylyshyn]]{{sfn|Harnad|2001}} and [[Steven Pinker]],{{sfn|Pinker|1997}} among others.}} is the position in the philosophy of mind which argues that the [[mind]] can be accurately described as an [[information processing|information-processing]] system.
 
 
Each of the following, according to Harnad, is a "tenet" of computationalism:{{sfn|Harnad|2001|pp=3–5}}
 
* Mental states are computational states (which is why computers can have mental states and help to explain the mind);
 
* Computational states are [[multiple realizability|implementation-independent]] — in other words, it is the software that determines the computational state, not the hardware (which is why the brain, being hardware, is irrelevant); and
 
* Since implementation is unimportant, the only empirical data that matters is how the system functions; hence the [[Turing test]] is definitive.
 
  
 
 
===Symbol processing===
 
{{Main article|Physical symbol system}}
 
The Chinese room (like all modern computers) manipulates physical objects in order to carry out calculations and do simulations. AI researchers [[Allen Newell]] and [[Herbert A. Simon]] called this kind of machine a [[physical symbol system]]. It is also equivalent to the [[formal system]]s used in the field of [[mathematical logic]].
 
 
Searle emphasizes the fact that this kind of symbol manipulation is [[syntax|syntactic]] (borrowing a term from the study of [[grammar]]). The computer manipulates the symbols using a form of [[syntax|syntax rules]], without any knowledge of the symbols' [[semantics]] (that is, their [[Meaning (semiotics)|meaning]]).
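As a concrete picture of what "syntax without semantics" means here, a minimal sketch in Python (the rule book below is an invented stand-in for the room's instructions, not anything from Searle's paper): the program maps input strings to output strings by lookup alone, and nothing in it represents what any string means.

<pre>
# Purely syntactic symbol manipulation: input symbols are matched against rules
# and mapped to output symbols. The rule book is an invented, illustrative
# stand-in for the room's instructions; no step below depends on understanding.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def chinese_room(input_symbols):
    """Follow the rules; the symbols are opaque tokens to this function."""
    return RULE_BOOK.get(input_symbols, "对不起，请再说一遍。")

print(chinese_room("你好吗？"))
</pre>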
 
  
 
Newell and Simon had conjectured that a physical symbol system (such as a digital computer) had all the necessary machinery for "general intelligent action", or, as it is known today, [[artificial general intelligence]]. They framed this as a philosophical position, the [[physical symbol system|physical symbol system hypothesis]]: "A physical symbol system has the [[sufficient|necessary and sufficient means]] for general intelligent action."{{sfn|Newell|Simon|1976|p=116}}{{sfn|Russell|Norvig|2003|p=18}} The Chinese room argument does not refute this, because it is framed in terms of "intelligent action", i.e. the external behavior of the machine, rather than the presence or absence of understanding, consciousness and mind.
 
 
 
;System reply: The basic "systems reply" argues that it is the "whole system" that understands Chinese.<ref>{{Harvnb|Searle|1980|pp=5–6}}; {{Harvnb|Cole|2004|pp=6–7}}; {{Harvnb|Hauser|2006|pp=2–3}}; {{Harvnb|Russell|Norvig|2003|p=959}}, {{Harvnb|Dennett|1991|p=439}}; {{Harvnb|Fearn|2007|p=44}}; {{Harvnb|Crevier|1993|p=269}}.</ref>{{efn|This position is held by [[Ned Block]], [[Jack Copeland]], [[Daniel Dennett]], [[Jerry Fodor]], [[John Haugeland]], [[Ray Kurzweil]], and [[Georges Rey]], among others.{{sfn|Cole|2004|p=6}}}} While the man understands only English, when he is combined with the program, scratch paper, pencils and file cabinets, they form a system that can understand Chinese. "Here, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part", Searle explains.{{sfn|Searle|1980|p=6}} The fact that the man does not understand Chinese is irrelevant, because it is only the system as a whole that matters.
 
 
Searle notes that (in this simple version of the reply) the "system" is nothing more than a collection of ordinary physical objects; it grants the power of understanding and consciousness to "the conjunction of that person and bits of paper"{{sfn|Searle|1980|p=6}} without making any effort to explain how this pile of objects has become a conscious, thinking being. Searle argues that no reasonable person should be satisfied with the reply, unless they are "under the grip of an ideology".{{sfn|Searle|1980|p=6}} In order for this reply to be remotely plausible, one must take it for granted that consciousness can be the product of an information processing "system", and does not require anything resembling the actual biology of the brain.
 
 
:To clarify the distinction between the simple systems reply given above and the virtual mind reply (which holds that the understanding belongs not to the man or the room but to a distinct, virtual mind created by running the program), David Cole notes that two simulations could be running on one system at the same time: one speaking Chinese and one speaking Korean. While there is only one system, there can be multiple "virtual minds", so the "system" cannot simply be identified with the "mind".{{sfn|Cole|2004|p=8}}
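Cole's two-simulations point can be pictured with a small sketch (a deliberate simplification, not his formulation): one process can host two independent rule-following instances at once, so "the system" and "the thing doing the conversing" need not be the same thing.

<pre>
# One physical system, two independent "virtual" conversers. The rule books are
# placeholder fragments; the only point is that a single host can realize more
# than one such instance at the same time.

class RuleFollower:
    def __init__(self, rules, fallback):
        self.rules, self.fallback = rules, fallback

    def reply(self, msg):
        return self.rules.get(msg, self.fallback)

chinese = RuleFollower({"你好": "你好！"}, "……")
korean = RuleFollower({"안녕하세요": "안녕하세요!"}, "……")

# Both run in the same process on the same hardware ("one system"), yet they
# are distinct conversational instances ("multiple virtual minds").
print(chinese.reply("你好"), korean.reply("안녕하세요"))
</pre>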
 
 
Searle responds that such a mind is, at best, a simulation, and writes: "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched."{{sfn|Searle|1980|p=12}} Nicholas Fearn responds that, for some things, simulation is as good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don't complain that 'it isn't ''really'' a calculator', because the physical attributes of the device do not matter."{{sfn|Fearn|2007|p=47}} The question is, is the human mind like the pocket calculator, essentially composed of information? Or is the mind like the rainstorm, something other than a computer, and not realizable in full by a computer simulation? (The issue of simulation is also discussed in the article [[synthetic intelligence]].)
 
 
These replies provide an explanation of exactly who it is that understands Chinese. If there is something ''besides'' the man in the room that can understand Chinese, Searle can't argue that (1) the man doesn't understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle's argument fails to prove that "strong AI" is false.{{efn|David Cole writes "From the intuition that in the CR thought experiment he would not understand Chinese by running a program, Searle infers that there is no understanding created by running a program. Clearly, whether that inference is valid or not turns on a metaphysical question about the identity of persons and minds. If the person understanding is not identical with the room operator, then the inference is unsound."{{sfn|Cole|2004|p=21}}}}
 
 
However, the replies, by themselves, do not prove that strong AI is ''true'', either: they provide no evidence that the system (or the virtual mind) understands Chinese, other than the [[hypothetical]] premise that it passes the [[Turing Test]]. As Searle writes, "the systems reply simply begs the question by insisting that the system must understand Chinese."{{sfn|Searle|1980|p=6}}
 
  
 


Full Title or Meme

Facts, information, and skills acquired by a person through experience and education; the theoretical or practical understanding of a subject.

Context

A significant number of philosophers have convinced themselves that there is no way that a computer could ever be said to have human Knowledge of any subject.

  • Searle: The Chinese room argument holds that a program cannot give a computer a "mind", "understanding" or "consciousness" regardless of how intelligently or human-like the program may make the computer behave.[1] Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that "strong AI" is false.

Most of the discussion consists of attempts to refute it. "The overwhelming majority," notes BBS editor Stevan Harnad, "still think that the Chinese Room Argument is dead wrong." The sheer volume of the literature that has grown up around it inspired Pat Hayes to comment that the field of cognitive science ought to be redefined as "the ongoing research program of showing Searle's Chinese Room Argument to be false".[2]

Searle's argument has become "something of a classic in cognitive science," according to Harnad. Varol Akman agrees, and has described the original paper as "an exemplar of philosophical clarity and purity".[3] This all boils down to a silly verbalism, an argument about the meaning of words, which is why this wiki seeks to define the words that it depends on. In this wiki words mean exactly what we want them to mean, no more and no less.

Consciousness

Searle's original presentation emphasized "understanding"—that is, mental states with what philosophers call "intentionality"—and did not directly address other closely related ideas such as "consciousness". However, in more recent presentations Searle has included consciousness as the real target of the argument. David Chalmers writes "it is fairly clear that consciousness is at the root of the matter" of the Chinese room.

Colin McGinn argues that the Chinese room provides strong evidence that the hard problem of consciousness is fundamentally insoluble. The argument, to be clear, is not about whether a machine can be conscious, but about whether it (or anything else for that matter) can be shown to be conscious. It is plain that any other method of probing the occupant of a Chinese room has the same difficulties in principle as exchanging questions and answers in Chinese. It is simply not possible to divine whether a conscious agency or some clever simulation inhabits the room.

Searle argues that this is only true for an observer outside of the room. The whole point of the thought experiment is to put someone inside the room, where they can directly observe the operations of consciousness. Searle claims that from his vantage point within the room there is nothing he can see that could imaginably give rise to consciousness, other than himself, and clearly he does not have a mind that can speak Chinese.

Applied Ethics

Image: Sitting in the Combat Information Center aboard a warship - proposed as a real-life analog to the Chinese Room.

Patrick Hew used the Chinese Room argument to deduce requirements for military command and control systems if they are to preserve a commander's moral agency. He drew an analogy between a commander in their command center and the person in the Chinese Room, and analyzed it under a reading of Aristotle's notions of 'compulsory' and 'ignorance'. Information could be 'down converted' from meaning to symbols, and manipulated symbolically, but moral agency could be undermined if there was inadequate 'up conversion' into meaning. Hew cited examples from the USS Vincennes incident.[4]

Strong AI vs. AI research

Searle's arguments are not usually considered an issue for AI research. Stuart Russell and Peter Norvig observe that most AI researchers "don't care about the strong AI hypothesis—as long as the program works, they don't care whether you call it a simulation of intelligence or real intelligence." The primary mission of artificial intelligence research is only to create useful systems that act intelligently, and it does not matter if the intelligence is "merely" a simulation.

Searle does not disagree that AI research can create machines that are capable of highly intelligent behavior. The Chinese room argument leaves open the possibility that a digital machine could be built that acts more intelligently than a person, but does not have a mind or intentionality in the same way that brains do. Indeed, Searle writes that "the Chinese room argument ... assumes complete success on the part of artificial intelligence in simulating human cognition."

Searle's "strong AI" should not be confused with "strong AI" as defined by Ray Kurzweil and other futurists,{{#invoke:Footnotes|sfn}} who use the term to describe machine intelligence that rivals or exceeds human intelligence. Kurzweil is concerned primarily with the amount of intelligence displayed by the machine, whereas Searle's argument sets no limit on this. Searle argues that even a super-intelligent machine would not necessarily have a mind and consciousness.

Turing test

Main article: Turing test

The Chinese room implements a version of the Turing test. Alan Turing introduced the test in 1950 to help answer the question "can machines think?" In the standard version, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.
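Expressed as a protocol sketch (the judge, the toy participants, and the trial count are illustrative assumptions; Turing's paper does not specify them this way), the judge exchanges messages with two hidden parties and must say which is the machine; the machine passes if the judge cannot reliably pick it out.

<pre>
import random

# A schematic imitation game: the judge converses with two hidden parties and
# guesses which one is the machine. Participants and judge are placeholders;
# only the shape of the protocol is the point.

def run_trial(judge, human, machine):
    """Return True if the judge failed to identify the machine."""
    labels = ["A", "B"]
    random.shuffle(labels)                        # hide which label is which
    parties = dict(zip(labels, [human, machine]))
    transcript = {label: parties[label]("Tell me about yourself.")
                  for label in sorted(parties)}
    guess = judge(transcript)                     # judge names the machine
    return parties[guess] is not machine

# Toy participants for illustration only: the "machine" answers exactly like
# the human, so the judge can do no better than chance.
human = lambda prompt: "I grew up by the sea and I dislike crosswords."
machine = lambda prompt: "I grew up by the sea and I dislike crosswords."
judge = lambda transcript: random.choice(list(transcript))

trials = [run_trial(judge, human, machine) for _ in range(1000)]
print("judge fooled in", sum(trials), "of", len(trials), "trials")
</pre>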

Turing then considered each possible objection to the proposal "machines can think", and found that there are simple, obvious answers if the question is de-mystified in this way. He did not, however, intend for the test to measure for the presence of "consciousness" or "understanding". He did not believe this was relevant to the issues that he was addressing, writing that although there is a mystery about consciousness, these mysteries do not necessarily need to be solved before we can answer the question of whether machines can think.

To Searle, as a philosopher investigating the nature of mind and consciousness, these are the relevant mysteries. The Chinese room is designed to show that the Turing test is insufficient to detect the presence of consciousness, even if the room can behave or function as a conscious mind would.


Robot and semantics replies: finding the meaning

As far as the person in the room is concerned, the symbols are just meaningless "squiggles." But if the Chinese room really "understands" what it is saying, then the symbols must get their meaning from somewhere. These arguments attempt to connect the symbols to the things they symbolize. These replies address Searle's concerns about intentionality, symbol grounding and syntax vs. semantics.

Robot reply
Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment. This would allow a "causal connection" between the symbols and the things they represent.[5] Hans Moravec comments: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."[6]
Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean. Searle writes "he doesn't see what comes into the robot's eyes." (See Mary's room for a similar thought experiment.)
Derived meaning
Some respond that the room, as Searle describes it, is connected to the world: through the Chinese speakers that it is "talking" to and through the programmers who designed the knowledge base in his file cabinet. The symbols Searle manipulates are already meaningful; they're just not meaningful to him.[7]
Searle says that the symbols only have a "derived" meaning, like the meaning of words in books. The meaning of the symbols depends on the conscious understanding of the Chinese speakers and the programmers outside the room. The room, according to Searle, has no understanding of its own.
Commonsense knowledge / contextualist reply
Some have argued that the meanings of the symbols would come from a vast "background" of commonsense knowledge encoded in the program and the filing cabinets. This would provide a "context" that would give the symbols their meaning.
Searle agrees that this background exists, but he does not agree that it can be built into programs. Hubert Dreyfus has also criticized the idea that the "background" can be represented symbolically.

To each of these suggestions, Searle's response is the same: no matter how much knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic and this can never explain to him what the symbols stand for. Searle writes "syntax is insufficient for semantics."

However, for those who accept that Searle's actions simulate a mind, separate from his own, the important question is not what the symbols mean to Searle; what is important is what they mean to the virtual mind. While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through the cameras and other sensors that roboticists can supply.

Speed and complexity: appeals to intuition

The following arguments (and the intuitive interpretations of the arguments above) do not directly explain how a Chinese speaking mind could exist in Searle's room, or how the symbols he manipulates could become meaningful. However, by raising doubts about Searle's intuitions they support other positions, such as the system and robot replies. These arguments, if accepted, prevent Searle from claiming that his conclusion is obvious by undermining the intuitions that his certainty requires.

Several critics believe that Searle's argument relies entirely on intuitions. Ned Block writes "Searle's argument depends for its force on intuitions that certain entities do not think."[8] Daniel Dennett describes the Chinese room argument as a misleading "intuition pump" and writes "Searle's thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the 'obvious' conclusion from it."

Some of the arguments above also function as appeals to intuition, especially those that are intended to make it seem more plausible that the Chinese room contains a mind, such as the robot, commonsense knowledge, brain simulation and connectionist replies. Several of the replies above also address the specific issue of complexity. The connectionist reply emphasizes that a working artificial intelligence system would have to be as complex and as interconnected as the human brain. The commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be "an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge", as Daniel Dennett explains.

Speed and complexity replies
The speed at which human brains process information is (by some estimates) 100 billion operations per second. Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.[9]
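A back-of-the-envelope version of the point (every figure below is an assumption chosen only to show the orders of magnitude, not a measurement):

<pre>
# Rough arithmetic behind the speed reply. All inputs are illustrative
# assumptions, not measured values.

brain_ops_per_second = 100e9                       # the estimate quoted above
ops_for_one_reply = 0.5 * brain_ops_per_second     # say half a second of "thought"
seconds_per_manual_op = 600                        # ten minutes to find and apply one rule on paper

seconds = ops_for_one_reply * seconds_per_manual_op
years = seconds / (3600 * 24 * 365)
print(f"one reply would take roughly {years:,.0f} years")   # on these numbers, ~950,000 years
</pre>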


Stevan Harnad is critical of speed and complexity replies when they stray beyond addressing our intuitions. He writes "Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental. It should be clear that is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of 'complexity.')"

Other minds reply
This reply points out that Searle's argument is a version of the problem of other minds, applied to machines. There is no way we can determine if other people's subjective experience is the same as our own. We can only study their behavior (i.e., by giving them our own Turing test). Critics of Searle argue that he is holding the Chinese room to a higher standard than we would hold an ordinary person.

Alan Turing anticipated Searle's line of argument (which he called "The Argument from Consciousness") in 1950 and made the other minds reply. He noted that people never consider the problem of other minds when dealing with each other. He writes that "instead of arguing continually over this point it is usual to have the polite convention that everyone thinks." The Turing test simply extends this "polite convention" to machines. He did not intend to solve the problem of other minds (for machines or people), and he did not think we needed to.

Problem

As a User's Identifiers, Attributes, etc. (which we can call claims) are collected from a Distributed Identity resident across the cloud in unrelated databases and block-chains, the challenge of turning this data into an Authorization decision becomes quite complex. This is a problem that was once decided by the Knowledge of an expert; for example, large bank loans needed to be approved by a vice president, and large corporate checks needed to be signed by the comptroller of the company.

Solution

The working solution can be viewed as an extension of an existing process in ecommerce, where the Relying Party collects the claims about the user and the context of the request (which will likely include user behavior and the value of the transaction) into a Trust Vector for processing by a Fraud Detection Service. The result is used to make the Authorization decision, or it may initiate a continued collection of user claims for a retry, as sketched below.
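A minimal sketch of that flow (the type names, fields, weights, and thresholds are assumptions made for illustration; they are not a specification of any particular Fraud Detection Service): claims and request context are gathered into a trust vector, scored, and the score drives one of three outcomes.

<pre>
from dataclasses import dataclass, field

# Illustrative sketch of the flow described above: claims about the user plus
# request context are collected into a "trust vector", a fraud-detection step
# scores it, and the score yields an authorization decision or a request for
# more claims. Field names, weights, and thresholds are assumptions.

@dataclass
class TrustVector:
    claims: dict = field(default_factory=dict)   # identifiers, attributes, etc.
    behavior_score: float = 0.0                  # 0 (suspicious) .. 1 (typical)
    transaction_value: float = 0.0

def fraud_score(tv):
    verified = sum(1 for claim in tv.claims.values() if claim.get("verified"))
    coverage = verified / max(len(tv.claims), 1)
    value_risk = min(tv.transaction_value / 10_000, 0.5)   # crude value-based risk
    return 0.6 * coverage + 0.4 * tv.behavior_score - value_risk

def authorize(tv):
    score = fraud_score(tv)
    if score >= 0.7:
        return "allow"
    if score >= 0.4:
        return "collect-more-claims"             # retry after gathering more claims
    return "deny"

tv = TrustVector(
    claims={"email": {"verified": True}, "address": {"verified": True}},
    behavior_score=0.8,
    transaction_value=2_500,
)
print(authorize(tv))                             # -> "collect-more-claims" on these numbers
</pre>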

References

See also Emergent behavior
  1. John Searle, "Minds, Brains, and Programs", (1980) Behavioral and Brain Sciences
  2. Harnad 2001, p. 1; Cole 2004, p. 2.
  3. In Akman's review of Mind Design II
  4. Template:Cite journal
  5. Template:Harvnb; Template:Harvnb; Template:Harvnb; Template:Harvnb.
  6. Quoted in Template:Harvnb
  7. Template:Harvnb; Template:Harvnb.
  8. Quoted in Template:Harvnb.
  9. Template:Harvnb; Template:Harvnb; Template:Harvnb.