Artificial Intelligence
Full Title or Meme
Intelligence without human vindictiveness or human compassion.
Context
- Artificial General Intelligence redirects here. In this context Intelligence relates primarily to Identity Knowledge, but that has many general aspects.
- Some people think that Artificial Intelligence must be cold or sterile, but there is plenty of evidence that this is not so, for example:
  - ELIZA
  - Alexa
Problems
- Humans seem unable to use Intelligent Design alone to fashion Artificial Intelligence. In fact, researchers continue to go back to nature and to human functioning for inspiration and even for algorithms.
- Training an Artificial Intelligence with human behaviors will result in unacceptable behavior by the Artificial Intelligence (see the sketch after this list).
  - In March 2016 Microsoft released Tay,[1] a web robot (bot) built to respond to tweets and chats.[2] The result was disastrous: Tay learned to be racist from its all-too-human trainers and was shut down within days.
  - Google has been plagued with reports and legal action over its search results almost continuously since the service was introduced; the latest comes from the president, Donald Trump, who accuses it of favoritism toward leftist causes.[3] Researcher Safiya U. Noble has written a book[4] mostly complaining that all-too-human programmers injected their own prejudices into their work. What else could she expect of humans, to rise above themselves, whatever that might mean in terms of free speech?
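A minimal sketch of the mechanism behind the Tay incident, assuming nothing about Microsoft's actual implementation: a toy bot that stores whatever users tell it and replays it later. The EchoLearnerBot class, its blocklist parameter, and the sample messages are hypothetical illustrations, not real APIs; the point is only that an unfiltered learner mirrors its trainers, while even a crude filter changes what it will say back.

    import random
    from collections import defaultdict

    class EchoLearnerBot:
        """Toy bot that 'learns' by storing user messages under a topic word
        and replaying them later. No model, no safety layer by default."""

        def __init__(self, blocklist=None):
            self.memory = defaultdict(list)        # topic -> messages seen so far
            self.blocklist = set(blocklist or [])  # words that trigger rejection

        def learn(self, topic, message):
            # Without filtering, hostile input becomes future output.
            if any(bad in message.lower() for bad in self.blocklist):
                return False                       # crude mitigation: refuse to store flagged text
            self.memory[topic].append(message)
            return True

        def reply(self, topic):
            seen = self.memory.get(topic)
            return random.choice(seen) if seen else "Tell me more."

    # The unfiltered bot mirrors its trainers; the filtered one does not.
    naive = EchoLearnerBot()
    naive.learn("people", "humans are terrible")    # hostile training input
    print(naive.reply("people"))                    # parrots the hostile line back

    guarded = EchoLearnerBot(blocklist=["terrible"])
    guarded.learn("people", "humans are terrible")  # rejected by the filter
    print(guarded.reply("people"))                  # falls back to a neutral reply

Real systems would replace the word blocklist with learned toxicity classifiers and human review, but the failure mode the Tay references describe is the same: whatever the training loop accepts from its human trainers, the bot will eventually say back.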
Solutions
References
- ↑ Peter Lee, Learning from Tay’s introduction. (2016-03-25) https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/
- ↑ Sarah Perez, Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism. (2016-03-24) TechCrunch https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/
- ↑ Farhad Manjoo, Search Bias, Blind Spots And Google.
- ↑ Safiya U. Noble, Algorithms of Oppression (2018)