Knowledge
Full Title or Meme
Facts, information, and skills acquired by a person through experience and education; the theoretical or practical Understanding of a subject.
Context
A significant number of philosophers have convinced themselves that there is no way that a computer could ever be said to have human Knowledge of any subject.
- Alan Turing: In the "standard interpretation" of the Turing Test, player C, the interrogator, is given the task of determining which of two players, A or B, on the other end of teletypewriter links is a computer and which is a human. The interrogator is limited to using the responses to typed questions to make the determination. The Turing test was introduced in 1950 to help answer the question "can machines think?" In the standard version, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. Turing then considered each possible objection to the proposal "machines can think", and found that there are simple, obvious answers if the question is de-mystified in this way. He did not, however, intend for the test to measure for the presence of "consciousness" or "understanding"; he did not believe this was relevant to the issues that he was addressing. He wrote: "I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper." Turing anticipated Searle's Chinese room argument (which he called "The Argument from Consciousness") in 1950 by noting that people never consider the problem of other minds when dealing with each other. He writes that "instead of arguing continually over this point it is usual to have the polite convention that everyone thinks." The Turing test simply extends this "polite convention" to machines.
- Newell and Simon had conjectured that a physical symbol system (such as a digital computer) had all the necessary machinery for "general intelligent action", or, as it is known today, artificial general intelligence. They framed this as a philosophical position, the physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action." The Chinese room argument does not refute this, because it is framed in terms of "intelligent action", i.e. the external behavior of the machine, rather than the presence or absence of understanding, consciousness and mind.
- Haldane was skeptical of Materialist views, saying in 1932: "If my opinions are the result of chemical processes going on in my brain, they are determined by the laws of chemistry, not the laws of logic".[1]
- Searle: The Chinese room argument addresses a version of the Turing test in which the program running in the computer is held to be no different from a person in a room who understands no Chinese, but who can produce fluent Chinese responses by manipulating symbols according to rules found in books and on cards. Searle asserts that this Symbol Manipulation cannot result in a computer having a "mind", "understanding" or "consciousness" regardless of how intelligently or human-like the program may make the computer behave.[2] Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that "strong AI" is false. Most of the commentary about Knowledge seems to be an attempt to refute Searle's argument, which has become "something of a classic in cognitive science". Varol Akman agrees, and has described the original paper as "an exemplar of philosophical clarity and purity".[3] To any of the other suggestions, Searle's response is always the same: no matter how much Knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic and this can never explain to him what the symbols stand for. Searle writes "syntax is insufficient for semantics." The Chinese room is designed to show that the Turing test is insufficient to detect the presence of consciousness, even if the room can behave or function as a conscious mind would.
- Dennett looks at all of Artificial Intelligence, and most particularly Deep Learning, as "parasitic: deep learning (so far) discriminates but doesn't notice. That is, the flood of data a system takes in does not have relevance for the system except as more 'food' to 'digest'" and further that "deep-learning machines are dependent on human understanding."[4] He seems to be unaware of the symbiotic relationship between humans and Wikipedia or the possibility of interaction between humans and Identity Management systems as further explicated on the page Bayesian Identity Proofing. But most damning of Dennett, and in fact of all of these philosophers, is that they think of human Knowledge as a high point to be reached for and emulated, rather than as the distillation of the rantings and ravings of a self-absorbed bunch of apes.
But all this discussion by philosophers boils down to a silly verbalism, an argument about the meaning of words, which is why this wiki seeks to define the words that it depends on. In this wiki words mean exactly what we want them to mean, no more and no less. Or, in other words, we choose words and their definitions as demanded by Popper.
Problem
- As a User's Identifiers, Attributes, etc. (in this wiki called Claims) are collected from a Distributed Identity hosted across the cloud in unrelated databases and block-chains, the challenge of turning this data into an Authorization decision becomes quite complex (see the sketch after this list). This is a problem that once was decided by the Knowledge of an expert and the Understanding of the context; for example, large bank loans needed to be approved by a vice president and large corporate checks needed to be signed by the comptroller of the company.
- Human Knowledge is, after all, "Human, All Too Human"[5], which means it cannot escape the limitations and prejudices of actual people going about their daily business.
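The sketch below illustrates the aggregation half of this problem: Claims about one User arriving from unrelated stores, each with its own names and levels of assurance, merged into the single picture from which an Authorization decision must then be made. The source names, claim names and assurance weights are hypothetical illustrations, not part of any real Distributed Identity API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    name: str        # e.g. "email_verified", "age_over_21"
    value: object
    source: str      # which unrelated store asserted it
    assurance: float # 0.0 .. 1.0, how much that source is trusted

def aggregate_claims(*sources) -> dict:
    """Merge claims from every source, keeping the highest-assurance
    assertion when two sources disagree about the same claim name."""
    merged = {}
    for source in sources:
        for claim in source:
            best = merged.get(claim.name)
            if best is None or claim.assurance > best.assurance:
                merged[claim.name] = claim
    return merged

# Example: two unrelated stores each hold part of the picture of one User.
bank_db = [Claim("email_verified", True, "bank_db", 0.9)]
chain   = [Claim("age_over_21", True, "public_chain", 0.6),
           Claim("email_verified", True, "public_chain", 0.4)]
claims = aggregate_claims(bank_db, chain)
# The Authorization decision must now be made from this merged, uneven data,
# where the Knowledge of a single expert once sufficed.
```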
Solution
Knowledge comes in two forms (which are not completely distinct):
- Active knowledge that is accessible to a Mind in a form that can be (nearly) immediately accessed and used.
- Passive knowledge that is coded in some form that can be absorbed by a Mind and (hopefully) turned into active knowledge. (This process is also called learning.)
Arthur Schopenhauer described the two forms of knowledge thus:[6]
As the biggest library, if it is in disorder, is not as useful as a small but well-arranged one; so you may accumulate a vast amount of knowledge but it will be of far less value than a much smaller amount if you have not thought it over for yourself, because only through ordering what you know by comparing every truth with every other truth can you take complete possession of your knowledge and get it into your power. You can think about only what you know, so you ought to learn something; on the other hand, you can know only what you have thought about.
Knowledge in Identity Management
The working solution for Authentication and Authorization can be viewed as an extension of an existing process in ecommerce where the Relying Party collects the claims about the user and the context of the request (which will likely include user behavior and the value of the transaction) into a Trust Vector for processing by a Fraud Detection Service. The result is used to make the Authorization decision, or (if insufficient) it may initiate a continued collection of user claims for a retry.
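The following is a minimal sketch of that flow, with a stand-in scoring function in place of a real Fraud Detection Service; the Trust Vector fields, thresholds and risk formula are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    AUTHORIZE = "authorize"
    DENY = "deny"
    COLLECT_MORE = "collect more claims and retry"

@dataclass
class TrustVector:
    claims: dict              # what the Relying Party knows about the User
    behavior_score: float     # 0.0 (suspicious) .. 1.0 (typical behavior)
    transaction_value: float  # value of the request, e.g. in dollars

def fraud_risk(tv: TrustVector) -> float:
    """Stand-in for a Fraud Detection Service: returns a risk in 0.0 .. 1.0.
    A real service would weigh far more context than this toy formula."""
    risk = 1.0 - tv.behavior_score
    if tv.transaction_value > 10_000:
        risk += 0.2                      # high-value requests carry more risk
    if not tv.claims.get("email_verified"):
        risk += 0.3                      # unproven identity raises risk
    return min(risk, 1.0)

def authorize(tv: TrustVector) -> Decision:
    risk = fraud_risk(tv)
    if risk < 0.3:
        return Decision.AUTHORIZE
    if risk < 0.7:
        return Decision.COLLECT_MORE     # insufficient Knowledge: gather more claims, retry
    return Decision.DENY

# Example: a large transfer from a user whose recent behavior looks mostly typical.
tv = TrustVector(claims={"email_verified": True}, behavior_score=0.8,
                 transaction_value=25_000)
print(authorize(tv))  # Decision.COLLECT_MORE
```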
In other words, with a Distributed Identity system, the Relying Party needs to be self-aware to the extent that it can determine what Knowledge it has about a User so that, if that knowledge is insufficient to Authorize access, the Relying Party has the Knowledge to seek out the additional claims it needs to grant access. The current state of the art is for Understanding of the context to be built into the algorithm that collects context, but that is starting to change with the inclusion of AI in Fraud Detection systems.
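A sketch of that self-awareness in its simplest form: the Relying Party compares the Claims it already holds against the Claims a policy requires, and from the gap decides whether to proceed to Authorization or to seek additional Claims first. The policy contents and claim names here are hypothetical illustrations.

```python
# The Claims a hypothetical high-value policy requires before access is granted.
REQUIRED_FOR_HIGH_VALUE = {"email_verified", "address_verified", "age_over_21"}

def missing_claims(held: dict, required: set) -> set:
    """The Relying Party's Knowledge about its own Knowledge: which required
    Claims are absent or unproven for this User."""
    return {name for name in required if not held.get(name)}

def next_step(held: dict) -> str:
    gap = missing_claims(held, REQUIRED_FOR_HIGH_VALUE)
    if not gap:
        return "sufficient Knowledge held; proceed to the Authorization decision"
    return f"seek additional Claims from the User's identity providers: {sorted(gap)}"

# Example: the Relying Party holds only one of the three required Claims.
print(next_step({"email_verified": True}))
# -> seek additional Claims from the User's identity providers:
#    ['address_verified', 'age_over_21']
```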
References
See also Emergent behavior

- ↑ J.B.S. Haldane, Some Consequences of Materialism in The Inequality of Man. Penguin (Pelican) (1932)
- ↑ John Searle, "Minds, Brains, and Programs", (1980) Behavioral and Brain Sciences
- ↑ In Akman's review of Mind Design II
- ↑ Daniel C. Dennett, From Bacteria to Bach and Back: The Evolution of Minds. (2017-02-17) ISBN 978-0393242072
- ↑ Friedrich Nietzsche, Human, All Too Human. (original 1878) ISBN 9780140446173
- ↑ Arthur Schopenhauer, Essays and Aphorisms. trans. R.J. Hollingdale, (1970) Penguin p. 89 ISBN 978-0140442274