From MgmtWiki
Revision as of 12:12, 2 October 2022
Full Title or Meme
Artificial Intelligence engines are being asked for an Explanation of how they came to a decision. Does that make any sense?
Context
Problems
- It is well known that users are likely to leave checkboxes as they were if doing so speeds up the process. For example, both the Dutch and the Belgian driver's license applications show a check box for organ donation. The Dutch box is initially unchecked; the Belgian box is initially checked. The Dutch acceptance of organ donation is 27%; the Belgian acceptance is 98%. When people were asked to give an Explanation of their choice, they typically were able to make a cogent explanation of their "choice".[1] What possible value could that explanation have for understanding?
- For ethical and safety reasons, users often expect an explanation of how the networks came to a conclusion in medical, financial, legal and military applications.[2]
Solutions
- The only explanation of possible value would be one that was made at the same time as the choice, but even then, as Malcolm Gladwell explained in his book Blink,[3] the choice might have been made very early and the explanation constructed later.
- Yann LeCun, chief AI scientist at Meta, has considered these problems.
References
- ↑ Justin Gregg, If Nietzsche were a Narwhal, Little Brown (2022-08) ISBN 978-0316388061
- ↑ Neurosymbolic AI, CACM 65 No 10 pp 11-12 (2022-10)
- ↑ Malcolm Gladwell, Blink