Artificial Intelligence

From MgmtWiki
Revision as of 10:43, 31 August 2018

Full Title or Meme

Intelligence without human vindictiveness or human compassion.

Context

  • Artificial General Intelligence redirects here. In this context Intelligence relates primarily to Identity Knowledge, but that has many general aspects.
  • Some people think that Artificial Intelligence must necessarily be cold or sterile, but there is ample evidence that this is not so:
    • ELIZA
    • Alexa

Problems

  • Humans seem unable to fashion Artificial Intelligence by intelligent design alone. In fact, researchers continue to return to nature and to human functioning for inspiration and even for algorithms.
  • Training an Artificial Intelligence with human behaviors will result in unacceptable behavior by the Artificial Intelligence.
    • Microsoft released Tay,[1] a web robot (bot) that responded to tweets and chats,[2] in March 2016. The result was disastrous: Tay learned racism from its all-too-human trainers and was shut down within days.
    • Google has been plagued by reports and legal action over its search results almost continuously since the service was introduced; the latest came from President Donald Trump, who accused it of favoritism toward leftist causes.[3] Researcher Safiya U. Noble has written a book[4] arguing chiefly that all-too-human programmers injected their own prejudices into their work. But what else could she expect of humans: that they rise above themselves, whatever that might mean in terms of free speech or freedom of religion?
    • The page Right to be Forgotten describes an effort in Europe to teach search engines to withhold information that people dislike about themselves, which paradoxically requires first collecting all of that disliked information. Be careful what you ask your Artificial Intelligence to do for you; it might just spill the beans some day, perhaps under court order.
    • In the movie "Blade Runner 2049", the protagonist's AI girlfriend asks to have her memory wiped so that the police cannot compel her to testify against him. One would hope that our future AIs will be that compassionate toward us, but such a wipe would probably be illegal.
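
The training problem listed above can be illustrated with a minimal sketch (hypothetical, invented data): a toy model that scores text by word frequencies learned from labeled examples. If the training data happens to pair a group name only with negative labels, the model inherits that association, just as Tay inherited the behavior of its trainers.

```python
# A toy sentiment scorer: count word occurrences in positive vs. negative
# training examples, then score new text by the difference. The data below
# is invented for illustration only.
from collections import Counter

def train(examples):
    """Count how often each word appears in positive vs negative examples."""
    pos, neg = Counter(), Counter()
    for text, label in examples:
        (pos if label == "positive" else neg).update(text.lower().split())
    return pos, neg

def score(text, pos, neg):
    """Positive count minus negative count; the sign is the learned sentiment."""
    words = text.lower().split()
    return sum(pos[w] for w in words) - sum(neg[w] for w in words)

# Biased training set: "outsiders" appears only with negative labels.
examples = [
    ("the team did great work", "positive"),
    ("what a wonderful launch", "positive"),
    ("outsiders ruined the thread", "negative"),
    ("outsiders posted terrible spam", "negative"),
]
pos, neg = train(examples)

# A neutral sentence mentioning the group is now scored negative:
print(score("outsiders joined the chat", pos, neg))  # -2
```

Nothing in the model is malicious; the prejudice lives entirely in the training data, which is the point of this section.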

Solutions

References

  1. Peter Lee, Learning from Tay’s introduction. (2016-03-25) https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/
  2. Sarah Perez, Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism. (2016-03-24) TechCrunch https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/
  3. Farhad Manjoo, Search Bias, Blind Spots And Google. (2018-08-31) New York Times p. B1
  4. Safiya U. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism. (2018-02-20) ISBN 978-1479849949