Knowledge

Full Title or Meme

Facts, information, and skills acquired by a person through experience and education; the theoretical or practical understanding of a subject.

Context

A significant number of philosophers have convinced themselves that there is no way that a computer could ever be said to have human Knowledge of any subject.

  • Searle: The Chinese room argument holds that a program cannot give a computer a "mind", "understanding" or "consciousness" regardless of how intelligently or human-like the program may make the computer behave.[1] Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that "strong AI" is false.


Most of the discussion consists of attempts to refute it. "The overwhelming majority," notes BBS editor Stevan Harnad, who edited BBS during the years in which the Chinese Room argument was introduced and popularized, "still think that the Chinese Room Argument is dead wrong." The sheer volume of the literature that has grown up around it inspired Pat Hayes to comment that the field of cognitive science ought to be redefined as "the ongoing research program of showing Searle's Chinese Room Argument to be false".[2]

Searle's argument has become "something of a classic in cognitive science," according to Harnad. Varol Akman agrees, and has described the original paper as "an exemplar of philosophical clarity and purity".[3] This all boils down to a silly verbalism, an argument about the meaning of words, which is why this wiki seeks to define the words that it depends on. In this wiki, words mean exactly what we want them to mean, no more and no less.

Philosophy

Although the Chinese Room argument was originally presented in reaction to the statements of AI researchers, philosophers have come to view it as an important part of the philosophy of mind. It is a challenge to functionalism and the computational theory of mind, and is related to such questions as the mind–body problem, the problem of other minds, the symbol-grounding problem, and the hard problem of consciousness.

Strong AI

Searle identified a philosophical position he calls "strong AI":

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

The definition hinges on the distinction between simulating a mind and actually having a mind. Searle writes that "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind."

The position is implicit in some of the statements of early AI researchers and analysts. For example, in 1955, AI founder Herbert A. Simon declared that "there are now in the world machines that think, that learn and create"[4] and claimed that they had "solved the venerable mind–body problem, explaining how a system composed of matter can have the properties of mind."[5] John Haugeland wrote that "AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves."

Searle also ascribes a number of further positions to advocates of strong AI, discussed in the subsections below.

Strong AI as computationalism or functionalism

In more recent presentations of the Chinese room argument, Searle has identified "strong AI" as "computer functionalism" (a term he attributes to Daniel Dennett). Functionalism is a position in modern philosophy of mind that holds that we can define mental phenomena (such as beliefs, desires, and perceptions) by describing their functions in relation to each other and to the outside world. Because a computer program can accurately represent functional relationships as relationships between symbols, a computer can have mental phenomena if it runs the right program, according to functionalism.

Stevan Harnad argues that Searle's depictions of strong AI can be reformulated as "recognizable tenets of computationalism, a position (unlike "strong AI") that is actually held by many thinkers, and hence one worth refuting." Computationalism is the position in the philosophy of mind which argues that the mind can be accurately described as an information-processing system.

Each of the following, according to Harnad, is a "tenet" of computationalism:

  • Mental states are computational states (which is why computers can have mental states and help to explain the mind);
  • Computational states are implementation-independent — in other words, it is the software that determines the computational state, not the hardware (which is why the brain, being hardware, is irrelevant); and that
  • Since implementation is unimportant, the only empirical data that matters is how the system functions; hence the Turing test is definitive.

Consciousness

Searle's original presentation emphasized "understanding"—that is, mental states with what philosophers call "intentionality"—and did not directly address other closely related ideas such as "consciousness". However, in more recent presentations Searle has included consciousness as the real target of the argument. David Chalmers writes "it is fairly clear that consciousness is at the root of the matter" of the Chinese room.

Colin McGinn argues that the Chinese room provides strong evidence that the hard problem of consciousness is fundamentally insoluble. The argument, to be clear, is not about whether a machine can be conscious, but about whether it (or anything else for that matter) can be shown to be conscious. It is plain that any other method of probing the occupant of a Chinese room has the same difficulties in principle as exchanging questions and answers in Chinese. It is simply not possible to divine whether a conscious agency or some clever simulation inhabits the room.

Searle argues that this is only true for an observer outside of the room. The whole point of the thought experiment is to put someone inside the room, where they can directly observe the operations of consciousness. Searle claims that from his vantage point within the room there is nothing he can see that could imaginably give rise to consciousness, other than himself, and clearly he does not have a mind that can speak Chinese.

Applied Ethics

[Image: USS Vincennes (CG-49) Aegis large screen displays. Caption: Sitting in the Combat Information Center aboard a warship, proposed as a real-life analog to the Chinese Room.]

Patrick Hew used the Chinese Room argument to deduce requirements for military command and control systems if they are to preserve a commander's moral agency. He drew an analogy between a commander in their command center and the person in the Chinese Room, and analyzed it under a reading of Aristotle's notions of 'compulsory' and 'ignorance'. Information could be 'down converted' from meaning to symbols, and manipulated symbolically, but moral agency could be undermined if there was inadequate 'up conversion' into meaning. Hew cited examples from the USS Vincennes incident.[6]

Computer science

The Chinese room argument is primarily an argument in the philosophy of mind, and both major computer scientists and artificial intelligence researchers consider it irrelevant to their fields. However, several concepts developed by computer scientists are essential to understanding the argument, including symbol processing, Turing machines, Turing completeness, and the Turing test.

Strong AI vs. AI research

Searle's arguments are not usually considered an issue for AI research. Stuart Russell and Peter Norvig observe that most AI researchers "don't care about the strong AI hypothesis—as long as the program works, they don't care whether you call it a simulation of intelligence or real intelligence." The primary mission of artificial intelligence research is only to create useful systems that act intelligently, and it does not matter if the intelligence is "merely" a simulation.

Searle does not disagree that AI research can create machines that are capable of highly intelligent behavior. The Chinese room argument leaves open the possibility that a digital machine could be built that acts more intelligently than a person, but does not have a mind or intentionality in the same way that brains do. Indeed, Searle writes that "the Chinese room argument ... assumes complete success on the part of artificial intelligence in simulating human cognition."

Searle's "strong AI" should not be confused with "strong AI" as defined by Ray Kurzweil and other futurists,{{#invoke:Footnotes|sfn}} who use the term to describe machine intelligence that rivals or exceeds human intelligence. Kurzweil is concerned primarily with the amount of intelligence displayed by the machine, whereas Searle's argument sets no limit on this. Searle argues that even a super-intelligent machine would not necessarily have a mind and consciousness.

Turing test


The Chinese room implements a version of the Turing test. Alan Turing introduced the test in 1950 to help answer the question "can machines think?" In the standard version, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.
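
The protocol can be pictured in a few lines of code. The sketch below is only an illustration of the standard three-participant setup described above; the function names, the single-round scoring, and the judge callback are assumptions invented for the example, not any standard implementation.

<pre>
# Minimal single-round sketch of the Turing test protocol (illustrative names only).
import random

def run_round(questions, human_reply, machine_reply, judge_guess) -> bool:
    """One round of the imitation game. The judge sees only labeled transcripts.
    Returns True if the judge misidentifies the machine this round; the real test
    asks whether the judge can beat chance over many such rounds."""
    hidden = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:                      # hide which label is which
        hidden = {"A": machine_reply, "B": human_reply}

    transcript = {label: [(q, reply(q)) for q in questions]
                  for label, reply in hidden.items()}

    guess = judge_guess(transcript)                # the label the judge believes is the machine
    truly_machine = "A" if hidden["A"] is machine_reply else "B"
    return guess != truly_machine
</pre>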

Turing then considered each possible objection to the proposal "machines can think", and found that there are simple, obvious answers if the question is de-mystified in this way. He did not, however, intend for the test to measure the presence of "consciousness" or "understanding", because he did not believe this was relevant to the issues that he was addressing.

To Searle, as a philosopher investigating the nature of mind and consciousness, these are the relevant mysteries. The Chinese room is designed to show that the Turing test is insufficient to detect the presence of consciousness, even if the room can behave or function as a conscious mind would.

Symbol processing

The Chinese room (and all modern computers) manipulates physical objects in order to carry out calculations and run simulations. AI researchers Allen Newell and Herbert A. Simon called this kind of machine a physical symbol system. It is also equivalent to the formal systems used in the field of mathematical logic.

Searle emphasizes the fact that this kind of symbol manipulation is syntactic (borrowing a term from the study of grammar). The computer manipulates the symbols using a form of syntax rules, without any knowledge of the symbols' semantics (that is, their meaning).
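
A few lines of code make the point concrete. The sketch below is a toy illustration, in the spirit of the Chinese room's rule book, of purely syntactic symbol manipulation; the rule table, its entries, and the function name are invented for the example, and no step of the lookup depends on what the symbols mean.

<pre>
# Toy "rule book": purely syntactic mappings from input symbols to output symbols.
# The program matches the shapes of the symbols; it has no access to their meaning.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",       # the rule-follower need not know these are greetings
    "今天星期几？": "今天是星期三。",   # a canned answer, right or wrong, produced by lookup alone
}

def chinese_room(input_symbols: str) -> str:
    """Follow the rules mechanically: find the symbol string and copy out the
    listed reply. No step requires understanding Chinese."""
    return RULE_BOOK.get(input_symbols, "请再说一遍。")   # default reply: 'please say that again'

print(chinese_room("你好吗？"))   # produces a fluent-looking reply by syntax alone
</pre>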

Newell and Simon had conjectured that a physical symbol system (such as a digital computer) had all the necessary machinery for "general intelligent action", or, as it is known today, artificial general intelligence. They framed this as a philosophical position, the physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action." The Chinese room argument does not refute this, because it is framed in terms of "intelligent action", i.e. the external behavior of the machine, rather than the presence or absence of understanding, consciousness and mind.

Chinese room and Turing completeness

The Chinese room has a design analogous to that of a modern computer. It has a Von Neumann architecture, which consists of a program (the book of instructions), some memory (the papers and file cabinets), a CPU which follows the instructions (the man), and a means to write symbols in memory (the pencil and eraser). A machine with this design is known in theoretical computer science as "Turing complete", because it has the necessary machinery to carry out any computation that a Turing machine can do, and therefore it is capable of doing a step-by-step simulation of any other digital machine, given enough memory and time. Alan Turing writes, "all digital computers are in a sense equivalent." The widely accepted Church–Turing thesis holds that any function computable by an effective procedure is computable by a Turing machine.
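
The "program plus memory plus rule-follower" analogy can be made concrete. The sketch below is a minimal single-tape Turing machine simulator, offered only as an illustration of how little machinery the analogy requires; the data layout and names are invented for the example.

<pre>
# Minimal Turing machine simulator: the rules dict is the "book of instructions",
# the tape dict is the "papers", and the loop plays the rule-following man.
def run_turing_machine(rules, tape, state="start", head=0, max_steps=10_000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state); '_' is the blank symbol."""
    tape = dict(tape)                                # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol                      # write
        head += 1 if move == "R" else -1             # move the head
    return tape

# Example: a two-rule machine that overwrites a block of 1s with 0s, then halts.
rules = {
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(rules, {0: "1", 1: "1", 2: "1"}))   # {0:'0', 1:'0', 2:'0', 3:'_'}
</pre>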

The Turing completeness of the Chinese room implies that it can do whatever any other digital computer can do (albeit much, much more slowly). Thus, if the Chinese room does not or can not contain a Chinese-speaking mind, then no other digital computer can contain a mind. Some replies to Searle begin by arguing that the room, as described, cannot have a Chinese-speaking mind. Arguments of this form, according to Stevan Harnad, are "no refutation (but rather an affirmation)" of the Chinese room argument, because these arguments actually imply that no digital computers can have a mind.

There are some critics, such as Hanoch Ben-Yami, who argue that the Chinese room cannot simulate all the abilities of a digital computer, such as being able to determine the current time.

Complete argument

Searle has produced a more formal version of the argument of which the Chinese Room forms a part. He presented the first version in 1984. The version given below is from 1990.[7] The only part of the argument which should be controversial is A3, and it is this point which the Chinese room thought experiment is intended to prove.

He begins with three axioms:

(A1) "Programs are formal (syntactic)."
A program uses syntax to manipulate symbols and pays no attention to the semantics of the symbols. It knows where to put the symbols and how to move them around, but it doesn't know what they stand for or what they mean. For the program, the symbols are just physical objects like any others.
(A2) "Minds have mental contents (semantics)."
Unlike the symbols used by a program, our thoughts have meaning: they represent things and we know what it is they represent.
(A3) "Syntax by itself is neither constitutive of nor sufficient for semantics."
This is what the Chinese room thought experiment is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around). The Chinese room has no semantics (because, according to Searle, there is no one or nothing in the room that understands what the symbols mean). Therefore, having syntax is not enough to generate semantics.

Searle posits that these lead directly to this conclusion:

(C1) Programs are neither constitutive of nor sufficient for minds.
This should follow without controversy from the first three: Programs don't have semantics. Programs have only syntax, and syntax is insufficient for semantics. Every mind has semantics. Therefore no programs are minds.
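
The step from A1–A3 to C1 can be checked mechanically under the strong reading given in the explanations above (a purely syntactic system has no semantics, and every mind has semantics). The Lean sketch below is only an illustration of that reading; the type and predicate names are invented for the example.

<pre>
-- A minimal formal sketch of how C1 follows from A1–A3, under the reading
-- in the prose above. All names (Entity, Program, Mind, ...) are illustrative.
theorem searle_C1
    {Entity : Type}
    (Program Mind Syntactic Semantic : Entity → Prop)
    (A1 : ∀ x, Program x → Syntactic x)         -- A1: programs are formal (syntactic)
    (A2 : ∀ x, Mind x → Semantic x)             -- A2: minds have mental contents (semantics)
    (A3 : ∀ x, Syntactic x → ¬ Semantic x) :    -- A3: syntax alone does not yield semantics
    ∀ x, Program x → ¬ Mind x := by             -- C1: no program is thereby a mind
  intro x hProg hMind
  exact A3 x (A1 x hProg) (A2 x hMind)
</pre>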

This much of the argument is intended to show that artificial intelligence can never produce a machine with a mind by writing programs that manipulate symbols. The remainder of the argument addresses a different issue. Is the human brain running a program? In other words, is the computational theory of mind correct? He begins with an axiom that is intended to express the basic modern scientific consensus about brains and minds:

(A4) Brains cause minds.

Searle claims that we can derive "immediately" and "trivially" that:

(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.
Brains must have something that causes a mind to exist. Science has yet to determine exactly what it is, but it must exist, because minds exist. Searle calls it "causal powers". "Causal powers" is whatever the brain uses to create a mind. If anything else can cause a mind to exist, it must have "equivalent causal powers". "Equivalent causal powers" is whatever else could be used to make a mind.

And from this he derives the further conclusions:

(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.
This follows from C1 and C2: Since no program can produce a mind, and "equivalent causal powers" produce minds, it follows that programs do not have "equivalent causal powers."
(C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.
Since programs do not have "equivalent causal powers", "equivalent causal powers" produce minds, and brains produce minds, it follows that brains do not use programs to produce minds.


System reply
The basic "systems reply" argues that it is the "whole system" that understands Chinese.[8]Template:Efn While the man understands only English, when he is combined with the program, scratch paper, pencils and file cabinets, they form a system that can understand Chinese. "Here, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part" Searle explains.{{#invoke:Footnotes|sfn}} The fact that man does not understand Chinese is irrelevant, because it is only the system as a whole that matters.

Searle notes that (in this simple version of the reply) the "system" is nothing more than a collection of ordinary physical objects; it grants the power of understanding and consciousness to "the conjunction of that person and bits of paper" without making any effort to explain how this pile of objects has become a conscious, thinking being. Searle argues that no reasonable person should be satisfied with the reply, unless they are "under the grip of an ideology." In order for this reply to be remotely plausible, one must take it for granted that consciousness can be the product of an information processing "system", and does not require anything resembling the actual biology of the brain.

To clarify the distinction between the simple systems reply given above and the virtual mind reply, David Cole notes that two simulations could be running on one system at the same time: one speaking Chinese and one speaking Korean. While there is only one system, there can be multiple "virtual minds," thus the "system" cannot be the "mind".

Searle responds that such a mind is, at best, a simulation, and writes: "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched." Nicholas Fearn responds that, for some things, simulation is as good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don't complain that 'it isn't really a calculator', because the physical attributes of the device do not matter." The question is, is the human mind like the pocket calculator, essentially composed of information? Or is the mind like the rainstorm, something other than a computer, and not realizable in full by a computer simulation? (The issue of simulation is also discussed in the article synthetic intelligence.)

These replies provide an explanation of exactly who it is that understands Chinese. If there is something besides the man in the room that can understand Chinese, Searle can't argue that (1) the man doesn't understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle's argument fails to prove that "strong AI" is false.

However, the replies, by themselves, do not prove that strong AI is true, either: they provide no evidence that the system (or the virtual mind) understands Chinese, other than the hypothetical premise that it passes the Turing Test. As Searle writes "the systems reply simply begs the question by insisting that system must understand Chinese."

Robot and semantics replies: finding the meaning

As far as the person in the room is concerned, the symbols are just meaningless "squiggles." But if the Chinese room really "understands" what it is saying, then the symbols must get their meaning from somewhere. These arguments attempt to connect the symbols to the things they symbolize. These replies address Searle's concerns about intentionality, symbol grounding and syntax vs. semantics.

Robot reply
Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment. This would allow a "causal connection" between the symbols and the things they represent.[9] Hans Moravec comments: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."[10]
Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean. Searle writes "he doesn't see what comes into the robot's eyes." (See Mary's room for a similar thought experiment.)
Derived meaning
Some respond that the room, as Searle describes it, is connected to the world: through the Chinese speakers that it is "talking" to and through the programmers who designed the knowledge base in his file cabinet. The symbols Searle manipulates are already meaningful; they're just not meaningful to him.[11]
Searle says that the symbols only have a "derived" meaning, like the meaning of words in books. The meaning of the symbols depends on the conscious understanding of the Chinese speakers and the programmers outside the room. The room, according to Searle, has no understanding of its own.
Commonsense knowledge / contextualist reply
Some have argued that the meanings of the symbols would come from a vast "background" of commonsense knowledge encoded in the program and the filing cabinets. This would provide a "context" that would give the symbols their meaning.
Searle agrees that this background exists, but he does not agree that it can be built into programs. Hubert Dreyfus has also criticized the idea that the "background" can be represented symbolically.

To each of these suggestions, Searle's response is the same: no matter how much knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic and this can never explain to him what the symbols stand for. Searle writes "syntax is insufficient for semantics."

However, for those who accept that Searle's actions simulate a mind, separate from his own, the important question is not what the symbols mean to Searle but what they mean to the virtual mind. While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through the cameras and other sensors that roboticists can supply.

Speed and complexity: appeals to intuition

The following arguments (and the intuitive interpretations of the arguments above) do not directly explain how a Chinese speaking mind could exist in Searle's room, or how the symbols he manipulates could become meaningful. However, by raising doubts about Searle's intuitions they support other positions, such as the system and robot replies. These arguments, if accepted, prevent Searle from claiming that his conclusion is obvious by undermining the intuitions that his certainty requires.

Several critics believe that Searle's argument relies entirely on intuitions. Ned Block writes "Searle's argument depends for its force on intuitions that certain entities do not think."[12] Daniel Dennett describes the Chinese room argument as a misleading "intuition pump" and writes "Searle's thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the 'obvious' conclusion from it."

Some of the arguments above also function as appeals to intuition, especially those that are intended to make it seem more plausible that the Chinese room contains a mind, which can include the robot, commonsense knowledge, brain simulation and connectionist replies. Several of the replies above also address the specific issue of complexity. The connectionist reply emphasizes that a working artificial intelligence system would have to be as complex and as interconnected as the human brain. The commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be "an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge", as Daniel Dennett explains.

Speed and complexity replies
The speed at which human brains process information is (by some estimates) 100 billion operations per second. Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.[13]
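
The scale is easy to work out. Assuming, purely for illustration, the 100-billion-operations-per-second figure quoted above and a plausible hand-simulation rate, the back-of-the-envelope arithmetic below shows why critics reach timescales of thousands to millions of years; all of the input figures are assumptions, not measurements.

<pre>
# Back-of-the-envelope arithmetic for the speed reply (illustrative figures only).
brain_ops_per_second = 100e9          # estimate quoted above
seconds_per_year     = 3600 * 24 * 365

# Simulating one second of brain activity at one rule application per second, by hand:
years_per_brain_second = brain_ops_per_second / seconds_per_year
print(round(years_per_brain_second))                 # ~3,171 years per simulated second

# A reply that takes the brain ~10 seconds, hand-simulated at one rule per minute:
print(round(10 * 60 * years_per_brain_second))       # ~1.9 million years
</pre>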


Stevan Harnad is critical of speed and complexity replies when they stray beyond addressing our intuitions. He writes "Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental. It should be clear that is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of 'complexity.')"

Other minds reply
This reply points out that Searle's argument is a version of the problem of other minds, applied to machines. There is no way we can determine if other people's subjective experience is the same as our own. We can only study their behavior (i.e., by giving them our own Turing test). Critics of Searle argue that he is holding the Chinese room to a higher standard than we would hold an ordinary person.

Alan Turing anticipated Searle's line of argument (which he called "The Argument from Consciousness") in 1950 and made the other minds reply. He noted that people never consider the problem of other minds when dealing with each other, writing that "instead of arguing continually over this point it is usual to have the polite convention that everyone thinks." The Turing test simply extends this "polite convention" to machines. Turing did not intend to solve the problem of other minds (for machines or people), and he did not think we needed to.


Problem

As a User's Identifiers, Attributes, etc. (which we can call claims) come to be collected from a Distributed Identity resident across the cloud in unrelated databases and block-chains, the challenge of turning this data into an Authorization decision becomes quite complex. This is a decision that was once made using the Knowledge of an expert; for example, large bank loans needed to be approved by a vice president, and large corporate checks needed to be signed by the comptroller of the company.

Solution

The working solution can be viewed as an extension of an existing process in ecommerce where the Relying Party collects the claims about the user and the context of the request (which will likely include user behavior and the value of the transaction) into a Trust Vector for processing by a Fraud Detection Service. The result will be used to make the Authorization decision, or it might trigger continued collection of user claims for a retry.
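
A rough sketch of that flow is given below. The class, the scoring function, the field names and the thresholds are all invented for the example; they stand in for whatever the Relying Party and the Fraud Detection Service actually expose, and are not any particular product's API.

<pre>
# Illustrative sketch: claims + context -> Trust Vector -> fraud score -> Authorization decision.
# All names, fields, and thresholds here are hypothetical.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class TrustVector:
    claims: Dict[str, str] = field(default_factory=dict)      # user identifiers and attributes
    context: Dict[str, float] = field(default_factory=dict)   # e.g. behavior score, transaction value

def score_fraud_risk(tv: TrustVector) -> float:
    """Stand-in for the external Fraud Detection Service: returns a risk in [0, 1]."""
    value    = tv.context.get("transaction_value", 0.0)
    behavior = tv.context.get("behavior_anomaly", 0.0)
    missing  = 0.2 if "verified_email" not in tv.claims else 0.0
    return min(1.0, 0.000001 * value + behavior + missing)

def authorize(tv: TrustVector) -> str:
    """Turn the fraud score into a decision, or ask for more claims and retry."""
    risk = score_fraud_risk(tv)
    if risk < 0.3:
        return "AUTHORIZED"
    if risk < 0.7:
        return "COLLECT_MORE_CLAIMS"    # gather additional user claims, then retry
    return "DENIED"

tv = TrustVector(claims={"verified_email": "user@example.com"},
                 context={"transaction_value": 500_000, "behavior_anomaly": 0.1})
print(authorize(tv))    # -> COLLECT_MORE_CLAIMS for this example input
</pre>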

References

  1. John Searle, "Minds, Brains, and Programs", (1980) Behavioral and Brain Sciences
  2. Harnad 2001, p. 1; Cole 2004, p. 2.
  3. In Akman's review of Mind Design II
  4. Quoted in Template:Harvnb.
  5. Quoted in Template:Harvnb and Template:Harvnb.
  6. Template:Cite journal
  7. Template:Harvnb; Template:Harvnb.
  8. Template:Harvnb; Template:Harvnb; Template:Harvnb; Template:Harvnb, Template:Harvnb; Template:Harvnb; Template:Harvnb.
  9. Template:Harvnb; Template:Harvnb; Template:Harvnb; Template:Harvnb.
  10. Quoted in Template:Harvnb
  11. Template:Harvnb; Template:Harvnb.
  12. Quoted in Template:Harvnb.
  13. Template:Harvnb; Template:Harvnb; Template:Harvnb.