==Full Title or Meme==
A process (usually expressed as an algorithm) for seeking a better solution.
  
 
==Context==
The process of improving the algorithms used in [[Identity Management]] is an ongoing effort to select and evaluate changes to the process in use. [[Bayesian Identity Proofing]] provides the means for evaluating a [[Trust Vector]] of [[Attribute]]s and [[Identifier]]s as a series of authentication and verification steps to be validated. Such algorithms are typically improved by selecting changes and then testing them to ensure that both the [[Assurance]] of the data collected and the [[User Experience]] have improved. In other words, this is about [[Evolution]]. J. B. S. Haldane was the first to create "A Mathematical Theory of Natural and Artificial Selection"<ref>J. B. S. Haldane, ''A Mathematical Theory of Natural and Artificial Selection,'' a ten-part essay, the first part published by the Cambridge Philosophical Society '''23''' pp. 19-41 (1924) https://web.archive.org/web/20041123231709/http://www.blackwellpublishing.com:80/ridley/classictexts/haldane1.pdf</ref>. He required a determination of two variables: (1) the intensity of selection and (2) the rate at which the old methods are removed from circulation. These same methods can be used today in tuning the [[Hunting]] algorithm to create the most rapid change possible, one that increases the quality of the algorithm without destroying the viability of the [[Entity]] using it, all while still thriving in the constantly changing [[Ecosystem]].
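As a rough illustration (a hypothetical sketch, not part of the proofing process itself), the following Python fragment applies Haldane's two variables to a pool of candidate algorithms: <code>selection_intensity</code> controls how strongly better-scoring variants are favoured, and <code>retirement_rate</code> controls how quickly the old methods are removed from circulation. All names and the scoring are assumptions made for illustration.
<syntaxhighlight lang="python">
import random

# Hypothetical sketch: each variant is (name, score), where score stands in
# for a combined measure of Assurance and User Experience.
def next_generation(variants, selection_intensity=0.5, retirement_rate=0.2):
    """One round of Haldane-style selection over candidate algorithms."""
    ranked = sorted(variants, key=lambda v: v["score"], reverse=True)
    # Retirement: drop the weakest fraction of the old methods.
    keep = ranked[: max(1, int(len(ranked) * (1 - retirement_rate)))]
    # Selection: better-scoring survivors are more likely to be copied
    # (with a small random mutation) into the next round.
    weights = [1 + selection_intensity * v["score"] for v in keep]
    parents = random.choices(keep, weights=weights, k=len(variants) - len(keep))
    children = [{"name": p["name"] + "'", "score": p["score"] + random.gauss(0, 0.05)}
                for p in parents]
    return keep + children

pool = [{"name": f"variant-{i}", "score": random.random()} for i in range(10)]
for _ in range(5):
    pool = next_generation(pool)
print(max(pool, key=lambda v: v["score"]))
</syntaxhighlight>
Raising the retirement rate speeds up change but risks discarding a viable method before its replacement is proven; raising the selection intensity narrows the pool faster, which is exactly the tension the rest of this page explores.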
  
 
==Problems==
* A hunt that tries to keep a solution optimal by continually [[Hunting]] for better ones only near the current one is likely to become trapped in a sub-optimal local maximum that could be improved by looking further afield in the space of possible solutions (see the sketch after this list).
* Geoffrey Hinton put it this way<ref name=hinton /><blockquote>You want to be good at doing the things you haven't yet seen, things that might be somewhat different from the training data.</blockquote>
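A minimal sketch of that trap, using an assumed toy quality function rather than anything from the identity-proofing process: a hunt that only accepts nearby improvements stalls on the lesser peak, while one allowed to look further afield can reach the better one.
<syntaxhighlight lang="python">
import random

# Assumed two-peak objective: a lesser local maximum near x = 1 (quality 1)
# and a better global maximum near x = 4 (quality 2).
def quality(x):
    return max(1.0 - (x - 1.0) ** 2, 2.0 - 0.5 * (x - 4.0) ** 2)

def hunt(x, step, rounds=200):
    """Accept a neighbouring candidate only when it scores better."""
    for _ in range(rounds):
        candidate = x + random.uniform(-step, step)
        if quality(candidate) > quality(x):
            x = candidate
    return x

random.seed(1)
print(round(hunt(0.0, step=0.1), 2))  # narrow hunt: stalls near the local peak at 1
print(round(hunt(0.0, step=3.0), 2))  # wider hunt: can cross the dip and reach ~4
</syntaxhighlight>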
  
 
==Solutions==
These are some of the considerations used to determine (1) the rate and complexity of the changes introduced and (2) the rate at which the older algorithms are retired, assuming that the [[Web Site]] testing the changes is using [[A/B Testing]]. A sketch of such a shifting hunt follows this list.
*A conservative [[Hunting]] algorithm will seek only solutions near the current one. It is very likely to settle on a local maximum and never look far from it.
*A liberal [[Hunting]] algorithm will seek further afield for better solutions. In learning systems this means that some measure of randomness needs to be introduced.<ref name=hinton>Geoffrey Hinton quoted in Neil Savage, ''Neural Net Worth,'' CACM '''62''' No. 6 p. 12 (2019-06)</ref>
 
*As in politics, a conservative algorithm will result in less [[Disruption]] and will often get trapped by local maxima, just as a hill climbing algorithm will.
*As in politics, a liberal solution will be needed when the conservative solutions have, by gradual [[Evolution]], diverged too far from the higher-level goals.
 
*As in politics, a conservative algorithm will eventually result in the destruction of the system by external [[Disruption]] or revolution from within.
*As in politics, a constant (but acceptable) shifting between liberal and conservative algorithms will provide adaptation of the solution without unacceptable [[Disruption]].
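The following sketch pulls the considerations above together. It is a hypothetical illustration only: the quality function, the simulated <code>ab_test()</code> comparison, and the schedule of liberal jumps are assumptions standing in for a real [[Web Site]]'s [[A/B Testing]] of the current algorithm against a candidate.
<syntaxhighlight lang="python">
import random

def quality(x):
    # Assumed stand-in for the combined Assurance / User Experience measure.
    return -abs(x - 7.0)

def ab_test(current, candidate):
    """Pretend A/B comparison: True if the candidate measures better (with noise)."""
    return quality(candidate) + random.gauss(0, 0.1) > quality(current)

def hunt(x, rounds=500, liberal_every=10):
    for i in range(rounds):
        if i % liberal_every == 0:
            candidate = x + random.uniform(-5, 5)      # liberal: look far afield
        else:
            candidate = x + random.uniform(-0.2, 0.2)  # conservative: stay nearby
        if ab_test(x, candidate):
            x = candidate  # retire the old algorithm, adopt the candidate
    return x

random.seed(2)
print(round(hunt(0.0), 1))  # should settle close to the assumed optimum at 7
</syntaxhighlight>
The occasional liberal jump is what lets the hunt escape a local maximum, while the mostly conservative steps keep each round's [[Disruption]] small.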
  
 
==References==
<references/>
  
 
[[Category:Glossary]]
[[Category:User Experience]]
