Knowledge
Latest revision as of 16:49, 31 July 2022
Full Title or Meme
Facts, information, and skills acquired by a person through experience and education; the theoretical or practical Understanding of a subject.
Context
A significant number of philosophers have convinced themselves that there is no way that a computer could ever be said to have human Knowledge of any subject.
- Alan Turing: In the "standard interpretation" of the Turing Test, player C, the interrogator, is given the task of trying to determine which of two players, A or B, on the other end of teletypewriter links is a computer and which is a human. The interrogator is limited to using the responses to typed-in questions to make the determination. The Turing test was introduced in 1950 to help answer the question "can machines think?" In the standard version, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. Turing then considered each possible objection to the proposal "machines can think", and found that there are simple, obvious answers if the question is de-mystified in this way. He did not, however, intend for the test to measure for the presence of "Consciousness" or "understanding"; he did not believe these were relevant to the issues he was addressing. He wrote: "I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper." Turing anticipated Searle's Chinese room argument (which he called "The Argument from Consciousness") in 1950 by noting that people never consider the problem of other minds when dealing with each other. He writes that "instead of arguing continually over this point it is usual to have the polite convention that everyone thinks." The Turing Test simply extends this "polite convention" to machines.
- Newell and Simon had conjectured that a physical symbol system (such as a digital computer) had all the necessary machinery for "general intelligent action", or, as it is known today, artificial general intelligence. They framed this as a philosophical position, the hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action." The Chinese room argument does not refute this, because it is framed in terms of "intelligent action", i.e. the external behavior of the machine, rather than the presence or absence of understanding, consciousness and mind.
- Haldane: Skeptical of Materialist views, he said in 1932: "If my opinions are the result of chemical processes going on in my brain, they are determined by the laws of chemistry, not the laws of logic".[1]
- Searle: The Chinese room argument implements a version of the Turing Test where a program in the computer is held to be no different from a room full of Chinese scholars who cannot understand English, but can translate English to Chinese using rules in books and cards. Searle asserts that this Symbol Manipulation cannot result in a computer having a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave.[2] Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that "strong AI" is false. Most of the commentary about Knowledge seems to be an attempt to refute Searle's argument, which has become "something of a classic in cognitive science". Varol Akman agrees, and has described the original paper as "an exemplar of philosophical clarity and purity".[3] To every suggested refutation, Searle's response is always the same: no matter how much Knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic, and this can never explain to him what the symbols stand for. Searle writes that "syntax is insufficient for semantics." The Chinese room is designed to show that the Turing Test is insufficient to detect the presence of consciousness, even if the room can behave or function as a conscious mind would.
- Dennett looks at all of Artificial Intelligence, and most particularly Deep Learning, as "parasitic: deep learning (so far) discriminates but doesn't notice. That is, the flood of data a system takes in does not have relevance for the system except as more 'food' to 'digest'", and further holds that "deep-learning machines are dependent on human understanding."[4] He seems to be unaware of the symbiotic relationship between humans and Wikipedia, or of the possibility of interaction between humans and Identity Management systems, as further explicated on the page Bayesian Identity Proofing. But most damning of Dennett, and in fact all of these philosophers, is that they think of human Knowledge as a high point to be reached for and emulated, rather than as the distillation of the rantings and ravings of a self-absorbed bunch of apes.
But all this discussion by philosophers boils down to a silly verbalism, an argument about the meaning of words, which is why this wiki seeks to define the words that are used here. In this wiki words mean exactly what we want them to mean, no more and no less. Or, in other words, we choose words and their definitions as is demanded by Popper[5].
Problem
- As a User's Identifiers, Attributes etc. (aka Claims) are collected from a Distributed Identity hosted across the cloud in unrelated databases and block-chains, the challenge of turning this data into an Authorization decision becomes quite complex. This is a problem that once was decided by the Knowledge of an expert and the Understanding of the context; for example, large bank loans needed to be approved by a vice president and large corporate checks needed to be signed by the comptroller of the company.
- Human Knowledge is, after all, "Human, All Too Human"[6], which means it cannot escape the limitations and prejudices of actual people going about their daily business.
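The claim-collection problem described in the first bullet above can be sketched in a few lines of Python. This is a minimal illustration, not a real identity-management API: the names `ClaimSource` and `gather_claims` are invented here, and the sample sources are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ClaimSource:
    """One unrelated store (database, block-chain, ...) holding claims
    about a User. Purely illustrative -- not a real identity API."""
    name: str
    claims: dict  # attribute name -> attested value

def gather_claims(sources):
    """Merge claims from every source, remembering where each came from.

    Attestations from unrelated stores are kept side by side rather than
    reconciled, which is exactly why turning the result into an
    Authorization decision is complex.
    """
    merged = {}
    for src in sources:
        for attr, value in src.claims.items():
            merged.setdefault(attr, []).append((src.name, value))
    return merged

# Hypothetical sources for illustration only.
bank = ClaimSource("bank_db", {"credit_score": 720, "age_over_21": True})
chain = ClaimSource("id_chain", {"age_over_21": True, "country": "US"})
claims = gather_claims([bank, chain])
# "age_over_21" now carries attestations from two independent sources.
```

Note that the merge deliberately preserves provenance: deciding which attestations to trust is left to a later step, just as the expert's Knowledge once was.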
Solution
Knowledge comes in two forms (which are not completely distinct):
- Active knowledge that is accessible to a Mind in a form that can be (nearly) immediately accessed and used.
- Passive knowledge that is coded in some form that can be absorbed by a Mind and (hopefully) turned into active knowledge. (This process is also called learning.)
Arthur Schopenhauer described the two forms of knowledge thus:[7]
As the biggest library, if it is in disorder, is not as useful as a small but well-arranged one; so you may accumulate a vast amount of knowledge but it will be of far less value than a much smaller amount if you have not thought it over for yourself, because only through ordering what you know by comparing every truth with every other truth can you take complete possession of your knowledge and get it into your power. You can think about only what you know, so you ought to learn something, on the other hand, you can know only what you have thought about.
Knowledge in Identity Management
The working solution for Authentication and Authorization can be viewed as an extension of an existing process in ecommerce where the Relying Party collects the claims about the user and the context of the request (which will likely include user behavior and value of the transaction) into a Trust Vector for processing by a Fraud Detection Service. The result will be used to make the Authorization decision, or (if insufficient) it might initiate a continued collection of user claims for a retry.
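The flow above can be sketched as a toy scoring loop. The trust-vector fields, the weights, and the threshold below are all invented for illustration; a real Fraud Detection Service would use its own model.

```python
def fraud_score(trust_vector):
    """Toy stand-in for a Fraud Detection Service: combine claim strength,
    observed user behavior, and transaction value into one score.
    The weights are arbitrary illustrative choices."""
    score = 0.0
    score += 0.5 * trust_vector.get("verified_claims", 0)    # more claims, more trust
    score += 0.3 * trust_vector.get("behavior_score", 0.0)   # 0..1, from user history
    score -= 0.2 * trust_vector.get("transaction_value", 0) / 1000.0  # risk scales with value
    return score

def authorize(trust_vector, threshold=1.0):
    """Return (decision, hint): grant outright, or signal that more
    user claims should be collected for a retry."""
    if fraud_score(trust_vector) >= threshold:
        return ("granted", None)
    return ("denied", "collect additional user claims and retry")

decision, hint = authorize(
    {"verified_claims": 3, "behavior_score": 0.9, "transaction_value": 500}
)
```

The retry hint mirrors the text: an insufficient result does not end the process, it initiates a further round of claim collection.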
In other words, with a Distributed Identity system, the Relying Party needs to be self-aware to the extent that it can determine what Knowledge it has about a User, so that, if that knowledge is insufficient to Authorize access, the Relying Party has the Knowledge to seek out the additional claims it needs to grant access. The current state-of-the-art is for Understanding of the context to be built into the algorithm that collects context, but that is starting to change with the inclusion of Artificial Intelligence in Fraud Detection systems.
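The "self-aware" gap analysis described above amounts to simple set arithmetic: the Relying Party compares the claims it holds against the claims its policy requires. The claim names and the policy below are hypothetical, chosen only to make the sketch concrete.

```python
def missing_claims(known_claims, policy):
    """What the Relying Party knows it does NOT yet know about the User.

    known_claims: set of claim names already collected.
    policy: set of claim names this Authorization decision requires.
    """
    return policy - known_claims

def next_step(known_claims, policy):
    """Authorize when the policy is satisfied; otherwise name the gaps."""
    gaps = missing_claims(known_claims, policy)
    if not gaps:
        return "authorize"
    # Knowledge of its own ignorance lets the RP request exactly these claims.
    return f"request: {sorted(gaps)}"

# Hypothetical policy for a single Authorization decision.
policy = {"age_over_21", "address_verified", "credit_score"}
print(next_step({"age_over_21", "credit_score"}, policy))
# the RP asks only for the claims it still lacks
```

This is the distinction the text draws: the context-collection algorithm encodes the policy, while knowing which claims are still missing is the Relying Party's Knowledge about its own state.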
References
- ↑ J.B.S. Haldane, Some Consequences of Materialism in The Inequality of Man. Penguin (Pelican) (1932)
- ↑ John Searle, "Minds, Brains, and Programs", (1980) Behavioral and Brain Sciences
- ↑ In Akman's review of Mind Design II
- ↑ Daniel C. Dennett, From Bacteria to Bach and Back: The Evolution of Minds. (2017-02-17) ISBN 978-0393242072
- ↑ Karl Popper The Open Society and Its Enemies (original 1945) Routledge ISBN 978-0415610216
- ↑ Friedrich Nietzsche, Human, All Too Human. (original 1878) ISBN 9780140446173
- ↑ Arthur Schopenhauer, Essays and Aphorisms, trans. R.J. Hollingdale. (1970) Penguin p. 89 ISBN 978-0140442274
Other Material
- See also the wiki page Emergent Behavior
- See also the wiki page Wisdom