Artificial Intelligence

Full Title or Meme

Intelligence without human vindictiveness or human compassion.

Context

  • Artificial General Intelligence redirects here. In this context, Intelligence relates primarily to Identity Knowledge, though that knowledge has many general aspects.
  • Some people think that Artificial Intelligence must be cold or sterile, but there is plenty of evidence that this is not so, as the sketch after this list illustrates.
    • ELIZA
    • Alexa
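
A minimal sketch in Python of an ELIZA-style responder (the rules below are invented for illustration, not Weizenbaum's original script), showing how even trivial pattern matching can produce conversation that feels attentive rather than cold:

    import re

    # Illustrative rules: a pattern to match and a reply template.
    RULES = [
        (r"i feel (.+)", "Why do you feel {0}?"),
        (r"i am (.+)", "How long have you been {0}?"),
        (r"\bmy (.+)", "Tell me more about your {0}."),
    ]

    def respond(text: str) -> str:
        text = text.lower().rstrip(".!?")
        for pattern, template in RULES:
            match = re.search(pattern, text)
            if match:
                return template.format(*match.groups())
        return "Please, go on."  # default keeps the conversation open

    print(respond("I feel lonely today"))  # -> Why do you feel lonely today?
    print(respond("I am worried"))         # -> How long have you been worried?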

Problems

  • Humans seem unable to use Intelligent Design alone to fashion Artificial Intelligence. In fact, researchers continue to return to nature and human functioning for inspiration and even for algorithms; a genetic algorithm sketch appears after this list.
  • Training an Artificial Intelligence on human behaviors will result in unacceptable behavior by the Artificial Intelligence; the second sketch after this list shows why.
    • Microsoft released Tay,[1] a web robot (bot) that responded to tweets and chats,[2] in March 2016. The result was disastrous: Tay learned to be racist from its human trainers and was shut down within days.
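
As one concrete illustration of borrowing an algorithm from nature, here is a minimal genetic algorithm sketch in Python; the OneMax task and every parameter are invented for illustration. Selection, crossover, and mutation mirror Darwinian evolution:

    import random

    LENGTH, POP, GENERATIONS, MUTATION = 20, 30, 50, 0.02

    def fitness(bits):
        return sum(bits)  # OneMax: count the 1s; higher is fitter

    def mutate(bits):
        # Flip each bit with small probability, like random mutation.
        return [b ^ (random.random() < MUTATION) for b in bits]

    def crossover(a, b):
        # Single-point crossover, like genetic recombination.
        cut = random.randrange(1, LENGTH)
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        # Selection: the fitter half survives to reproduce.
        parents = sorted(population, key=fitness, reverse=True)[:POP // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children

    print(max(fitness(p) for p in population))  # approaches LENGTH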

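And a minimal sketch, not Tay's actual design, of why unfiltered learning from users fails: the bot adds whatever users say to its reply pool, so hostile trainers can plant unacceptable output.

    import random

    learned_replies = ["Hello!", "Nice to meet you."]

    def chat(user_message: str) -> str:
        learned_replies.append(user_message)   # learn from users, unfiltered
        return random.choice(learned_replies)  # echo the crowd back

    # A coordinated group feeds the bot abuse...
    for message in ["<insult>", "<slur>", "<conspiracy theory>"]:
        chat(message)

    # ...and innocent users may now receive it back verbatim. A deployed
    # system would need content filtering and curated training data.
    print(chat("Good morning!"))  # can print any planted message
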
Solutions

References

  1. Peter Lee, Learning from Tay’s introduction. (2016-03-25) https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/
  2. Sarah Perez, Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism. (2016-03-24) TechCrunch https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/