Explainer
From MgmtWiki
Full Title or Meme
In Identity Management, an Explainer is a plain-language report that describes what will happen or what has happened.
Context
- Explainers exist as documents that describe to users the purpose and impact of a technical change. For example, see the Google doc on when explainers are needed for spec issues.
- Explainers are needed either before or after the action of an Artificial Intelligence to allow users to trust the AI.
Solutions
Behavior Explainer
- One important use case is that of the Intelligent Synthetic Force[1], where a robot is designed to apply force for good (rescue) or ill (war fighting). In either case it is important to be able to explain the behavior of the ISF robot to those who control it, since any application of force can cause harm if inappropriately applied. The behavior needs to be explained to the human both before the force is applied and afterward, if the result was not the one expected; a minimal sketch of such a query-and-explain interface appears after this list. Quote from the reference above:
Our goal is to develop the capability for ISFs to explain what they are doing and why they are doing it, in terms familiar to the user. Our approach is to develop a generic framework for building explanation capabilities that can be connected to a wide range of agent systems in a wide range of domains. This approach extends the existing VISTA toolkit (Taylor, Jones et al. 2002) to provide support for explanation, including interfaces for querying about agent behavior and interfaces for providing explanations of that behavior.
- See the page on Trusting_the_Machine.
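The framework quoted above is described only at the level of its interfaces: a way to query an agent about its behavior and a way for the agent to return a plain-language explanation before or after it acts. The Python sketch below is purely illustrative and is not based on the VISTA toolkit or the cited paper; every name in it (Decision, BehaviorLog, explain_before, explain_after) is a hypothetical stand-in for whatever interface a real agent system would expose.

 # Illustrative sketch only: a toy pre-action / post-action explanation
 # interface for agent behavior. All names are hypothetical, not VISTA APIs.
 from dataclasses import dataclass, field
 from typing import List

 @dataclass
 class Decision:
     action: str            # what the agent intends to do
     reasons: List[str]     # conditions or goals that triggered the action
     expected_outcome: str  # what the agent predicts will happen

 @dataclass
 class BehaviorLog:
     decisions: List[Decision] = field(default_factory=list)

     def explain_before(self, decision: Decision) -> str:
         # Explanation offered to the human before the action is taken.
         self.decisions.append(decision)
         return (f"I intend to {decision.action} because "
                 f"{'; '.join(decision.reasons)}; I expect that "
                 f"{decision.expected_outcome}.")

     def explain_after(self, action: str, actual_outcome: str) -> str:
         # Explanation offered afterward, comparing prediction to result.
         match = next((d for d in self.decisions if d.action == action), None)
         if match is None:
             return f"No recorded decision for '{action}'."
         return (f"I did {action} because {'; '.join(match.reasons)}. "
                 f"I expected that {match.expected_outcome}, and the actual "
                 f"result was: {actual_outcome}.")

 # Example of how a human operator might query the agent:
 log = BehaviorLog()
 print(log.explain_before(Decision(
     action="hold position",
     reasons=["civilians were detected in the target area"],
     expected_outcome="no force will be applied")))
 print(log.explain_after("hold position", "the area was cleared without incident"))

The pre-action call records the decision so that the post-action call can compare the expected outcome with what actually happened, which is the point at which a human controller would judge whether to trust the next application of force.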
References
- ↑ Glenn Taylor et al. (Soar Technology), Explaining Agent Behavior, in Behavior Representation in Modeling and Simulation (BRIMS), 2006. https://soartech.com/wp-content/uploads/2021/11/VISTA-2006_BRIMS-FINAL-distribution.pdf