Explainer

Full Title or Meme

In Identity Management, an Explainer is a plain-language report that describes what will happen or what has happened.
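
The two tenses of the definition can be made concrete with a short sketch. The code below models an Explainer report as a small data structure that renders either a prospective ("what will happen") or a retrospective ("what has happened") plain-language statement. Every name in it (ExplainerReport, render, and the example access-grant scenario) is hypothetical, invented for illustration, and not drawn from any cited system.

```python
from dataclasses import dataclass

# A hypothetical sketch of an Explainer report; class and field names
# are illustrative only, not taken from any cited system.
@dataclass
class ExplainerReport:
    action: str        # what the system will do / did, in plain words
    reason: str        # why, in terms familiar to the user
    prospective: bool  # True = "what will happen", False = "what has happened"

    def render(self) -> str:
        # Render a one-sentence plain-language report in the right tense.
        if self.prospective:
            return f"I will {self.action} because {self.reason}."
        return f"I {self.action} because {self.reason}."

# Example: the same Identity Management action explained before and after.
before = ExplainerReport("grant access to the shared folder",
                         "the requesting identity holds the 'editor' role",
                         prospective=True)
after = ExplainerReport("granted access to the shared folder",
                        "the requesting identity held the 'editor' role",
                        prospective=False)
print(before.render())
print(after.render())
```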

Context

Solutions

Behavior Explainer

  • One important use case is that of the Intelligent Synthetic Force (ISF)[1], where a robot is designed to apply force for good (rescue) or ill (war fighting). In either case it is important to be able to explain the ISF robot's behavior to those who control it, since any application of force can cause harm if applied inappropriately. The behavior therefore needs to be explained to the human both before the force is applied and afterward, if the result was not the one expected; a sketch of such a query-and-explain interface follows this list. Quote from the reference above:
    Our goal is to develop the capability for ISFs to explain what they are doing and why they are doing it, in terms familiar to the user. Our approach is to develop a generic framework for building explanation capabilities that can be connected to a wide range of agent systems in a wide range of domains. This approach extends the existing VISTA toolkit (Taylor, Jones et al. 2002) to provide support for explanation, including interfaces for querying about agent behavior and interfaces for providing explanations of that behavior.
  • See the page on Trusting the Machine.
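
To make the query-and-explain pattern in the quote concrete, here is a minimal sketch in the spirit of the interfaces it describes: one method answers "what are you about to do and why?" before force is applied, and another reports afterward, elaborating only when the outcome differed from what was expected. All names here (BehaviorExplainer, PlannedAction, explain_before, explain_after) are assumptions made for this sketch, not the actual VISTA toolkit API.

```python
# A hypothetical sketch of the query-and-explain pattern described above.
# None of these names come from the VISTA toolkit or the cited paper;
# they are assumptions made for illustration only.
from dataclasses import dataclass

@dataclass
class PlannedAction:
    name: str       # e.g. "breach the locked door"
    goal: str       # the goal the agent is pursuing, in user terms
    rationale: str  # why the agent selected this action for that goal

class BehaviorExplainer:
    """Answers 'what are you doing?' and 'why?' in terms familiar to the user."""

    def explain_before(self, action: PlannedAction) -> str:
        # Pre-action report: gives a human the chance to veto the use of force.
        return (f"I am about to {action.name} in order to {action.goal}, "
                f"because {action.rationale}.")

    def explain_after(self, action: PlannedAction, expected: str, observed: str) -> str:
        # Post-action report: only elaborates when the outcome was a surprise.
        if observed == expected:
            return f"I did {action.name}; the result was as expected ({observed})."
        return (f"I did {action.name} expecting '{expected}', "
                f"but observed '{observed}'. My rationale was: {action.rationale}.")

# Example: a rescue-robot action explained before and after execution.
explainer = BehaviorExplainer()
act = PlannedAction("breach the locked door", "reach the trapped occupant",
                    "thermal imaging shows a person behind it")
print(explainer.explain_before(act))
print(explainer.explain_after(act, expected="door opened", observed="door jammed"))
```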

References

  1. Glenn Taylor et al., Explaining Agent Behavior, Soar Technology, in Behavior Representation in Modeling and Simulation (BRIMS), 2006. https://soartech.com/wp-content/uploads/2021/11/VISTA-2006_BRIMS-FINAL-distribution.pdf