Knowledge

From MgmtWiki
Revision as of 13:07, 19 August 2018

Full Title or Meme

Facts, information, and skills acquired by a person through experience and education; the theoretical or practical Understanding of a subject.

Context

A significant number of philosophers have convinced themselves that there is no way that a computer could ever be said to have human Knowledge of any subject.

  • Alan Turing: In the "standard interpretation" of the Turing Test, player C, the interrogator, is given the task of determining which of two players, A or B, on the other end of teletypewriter links is a computer and which is a human, using only the responses to typed-in questions. Turing introduced the test in 1950 to help answer the question "can machines think?" In the standard version, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. Turing then considered each possible objection to the proposal "machines can think", and found that there are simple, obvious answers if the question is de-mystified in this way. He did not, however, intend for the test to measure for the presence of "consciousness" or "understanding"; he did not believe this was relevant to the issues he was addressing. To Searle, as a philosopher investigating the nature of Mind and consciousness, these are the relevant mysteries. The Chinese room is designed to show that the Turing test is insufficient to detect the presence of consciousness, even if the room can behave or function as a conscious mind would.

Alan Turing anticipated Searle's line of argument (which he called "The Argument from Consciousness") in 1950 and made the "other minds" reply. He noted that people never consider the problem of other minds when dealing with each other, writing that "instead of arguing continually over this point it is usual to have the polite convention that everyone thinks." The Turing test simply extends this "polite convention" to machines.

  • Newell and Simon conjectured that a physical symbol system (such as a digital computer) has all the necessary machinery for "general intelligent action", or, as it is known today, artificial general intelligence. They framed this as a philosophical position, the physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action." The Chinese room argument does not refute this, because it is framed in terms of "intelligent action", i.e. the external behavior of the machine, rather than the presence or absence of understanding, consciousness and mind.
  • Haldane: was skeptical of Materialist views and said in 1937: "If my opinions are the result of chemical processes going on in my brain, they are determined by the laws of chemistry, not the laws of logic".[1]
  • Searle: The Chinese room argument imagines a version of the Turing test in which a program running on a computer is held to be no different from a person in a room who does not understand Chinese but can translate English to Chinese using rules in books and cards. Searle asserts that this Symbol Manipulation cannot result in a computer having a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave.[2] Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes, "strong AI" is false. Most of the subsequent discussion consists of attempts to refute this argument, which has become something of a classic in cognitive science; Varol Akman has described the original paper as "an exemplar of philosophical clarity and purity".[3] To any other suggestion, Searle's response is always the same: no matter how much Knowledge is written into the program and no matter how the program is connected to the world, the person is still in the room manipulating symbols according to rules. His actions are syntactic, and this can never explain to him what the symbols stand for. As Searle writes, "syntax is insufficient for semantics."

But all this discussion by philosophers boils down to a silly verbalism, an argument about the meaning of words, which is why this wiki seeks to define the words that it depends on. In this wiki words mean exactly what we want them to mean, no more and no less. Or, in other words, we choose words and their definitions as is demanded by Popper.

Problem

As a User's Identifiers, Attributes etc. (in this wiki called Claims) are collected from a Distributed Identity hosted across the cloud in unrelated databases and block-chains, the challenge of turning this data into an Authorization decision becomes quite complex. This is a problem that once was decided by the Knowledge of an expert; for example, large bank loans needed to be approved by a vice president and large corporate checks needed to be signed by the comptroller of the company.

Solution

Knowledge comes in two forms (which are not completely distinct):

  1. Active knowledge that is held by a Mind in a form that can be (nearly) immediately accessed and used.
  2. Passive knowledge that is coded in some form that can be absorbed by a Mind and (hopefully) turned into active knowledge. (This process is also called learning.)

The working solution can be viewed as an extension of an existing process in ecommerce where the Relying Party collects the claims about the user and the context of the request (which will likely include user behavior and value of the transaction) into a Trust Vector for processing by a Fraud Detection Service. The result will be used to make the Authorization decision, or it might initiate a continued collection of user claims for a retry.
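This flow can be sketched in a few lines of Python. Everything here is an illustrative assumption rather than any standard API: the TrustVector shape, the claim keys, the toy scoring weights, and the 0.5 threshold are all hypothetical stand-ins for a real Fraud Detection Service.

```python
from dataclasses import dataclass

@dataclass
class TrustVector:
    claims: dict    # claims about the user collected by the Relying Party
    context: dict   # context of the request: user behavior, transaction value

def fraud_score(tv: TrustVector) -> float:
    """Toy stand-in for a Fraud Detection Service: higher means more trust."""
    score = 0.0
    if tv.claims.get("email_verified"):
        score += 0.4
    if tv.claims.get("mfa_passed"):
        score += 0.4
    # High-value transactions demand more evidence before authorization.
    if tv.context.get("transaction_value", 0) > 1000:
        score -= 0.3
    return score

def decide(tv: TrustVector, threshold: float = 0.5) -> str:
    """Return AUTHORIZE, or RETRY to trigger further collection of claims."""
    return "AUTHORIZE" if fraud_score(tv) >= threshold else "RETRY"

tv = TrustVector(claims={"email_verified": True},
                 context={"transaction_value": 50})
print(decide(tv))  # one claim alone is not enough evidence -> RETRY
```

The RETRY branch corresponds to the "continued collection of user claims" above: the Relying Party gathers more claims into the Trust Vector and scores again.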

In other words, with a distributed system, the Relying Party needs to be self-aware to the extent that it can determine what Knowledge it has about a User, so that if that Knowledge is insufficient to Authorize access, the Relying Party has the Knowledge to seek out the additional claims it needs to grant access.
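A minimal sketch of such a self-aware Relying Party is a loop that compares the claims it holds against a required set and seeks out only what is missing. The claim names, the REQUIRED_CLAIMS policy, and the fetch_claim helper are hypothetical, standing in for queries to external claims providers.

```python
# Policy: the claims this Relying Party requires before granting access
# (illustrative names, not part of any real protocol).
REQUIRED_CLAIMS = {"email_verified", "mfa_passed", "address_verified"}

def missing_claims(held: dict) -> set:
    """Self-awareness step: which required claims does the RP still lack?"""
    return {c for c in REQUIRED_CLAIMS if not held.get(c)}

def fetch_claim(claim: str) -> bool:
    # Stand-in for querying an external claims provider
    # (a database, block-chain, or identity provider).
    return True  # assume the provider can satisfy the claim

def authorize_with_retry(held: dict) -> bool:
    """Seek out the additional claims needed, then decide."""
    for claim in missing_claims(held):
        held[claim] = fetch_claim(claim)
    return not missing_claims(held)

print(authorize_with_retry({"email_verified": True}))
```

The key point is that the Relying Party first inspects its own Knowledge (missing_claims) before deciding where to go for more, rather than blindly re-asking for everything.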

References

See also Emergent behavior
  1. J.B.S. Haldane, The Inequality of Man, Penguin (Pelican), 1937.
  2. John Searle, "Minds, Brains, and Programs", Behavioral and Brain Sciences, 1980.
  3. Varol Akman, in his review of Mind Design II.