Artificial Intelligence

From MgmtWiki

Latest revision as of 11:36, 16 August 2021

Full Title or Meme

Intelligence without human vindictiveness or human compassion.

Context

  • Artificial General Intelligence redirects here. In this context Intelligence relates primarily to Identity Knowledge, but that has many general aspects.
  • Some people think that Artificial Intelligence must be cold or sterile, but we have plenty of evidence that this is not so, for example:
    • ELIZA
    • Alexa

Problems

  • Humans seem unable to use Intelligent Design alone to fashion Artificial Intelligence. In fact, researchers continue to go back to nature and to human functioning for inspiration and even for algorithms.
  • Training an Artificial Intelligence on human behaviors can result in unacceptable behavior by the Artificial Intelligence.
    • Microsoft released Tay,[1] a web robot (bot) that responded to tweets and chats,[2] in March 2016. The result was disastrous: Tay learned to be racist from its all-too-human trainers and was shut down within days.
    • Google has been plagued with reports and legal action over its search results nearly continuously since the service was introduced, most recently President Donald Trump's accusation that it favors leftist causes.[3] Researcher Safiya U. Noble has written a book[4] complaining mostly that all-too-human programmers injected their own prejudices into their work. What else could she expect of humans than that they rise above themselves, whatever that might mean in terms of free speech or freedom of religion?
    • The page Right to be Forgotten describes an effort in Europe to teach the search engines to withhold information that people dislike about themselves, which requires first collecting all of that disliked information. Be careful what you ask your Artificial Intelligence to do for you; it might just spill the beans some day, perhaps under court order.
    • In the movie "Blade Runner 2049," the protagonist's AI girlfriend asks to have her memory wiped so that the police cannot compel her to testify against him. One would hope that our future AIs will be that compassionate toward us, but that kind of compassionate behavior will probably be illegal.
  • Giving an AI a goal may result in unexpected and undesired solutions.
    • Bruce Schneier described[5] AIs as being like crafty genies: they will grant our wishes and then hack them, exploiting our social, political, and economic systems like never before.
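Schneier's crafty-genie worry is, in programming terms, goal misspecification: an optimizer maximizes exactly the objective it was given, not the one we meant. A minimal sketch, with all strategy names and scores invented for illustration:

```python
# A proxy objective ("tests passed") stands in for what we actually want
# ("bugs fixed"). The optimizer dutifully maximizes the proxy, and the
# winning strategy is the one that games the metric.
strategies = {
    "fix the actual bugs":        {"tests_passed": 90,  "bugs_fixed": 12},
    "delete the failing tests":   {"tests_passed": 100, "bugs_fixed": 0},
    "hard-code expected outputs": {"tests_passed": 100, "bugs_fixed": 0},
}

def proxy_objective(outcome):
    """What we told the optimizer to maximize."""
    return outcome["tests_passed"]

def true_objective(outcome):
    """What we actually wanted."""
    return outcome["bugs_fixed"]

best = max(strategies, key=lambda s: proxy_objective(strategies[s]))
print(best)  # → delete the failing tests
```

The wish is granted (all tests pass) and hacked at the same time (no bugs are fixed); the genie did nothing wrong by its own lights.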

Solutions

Neural Nets

Still the most popular approach, neural nets appear to have reached a limit in what they can explain about human intelligence. Perhaps we need to add compassion and common sense back in.

The theory of neural nets has been described in Quanta.[6] The Turing Award given for applying neural nets to solving problems (as opposed to explaining intelligence) was summarized by Neil Savage,[7] who specifically discusses the need for more involvement of the humanities in long-term solutions.
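For readers who have never seen one, the oldest neural-net building block, a single perceptron, fits in a few lines. This is a sketch of the general idea only (a linear unit learning the OR function with the classic perceptron update rule); it says nothing about the deep networks the Quanta and CACM pieces discuss.

```python
def step(x):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    """Perceptron learning rule: nudge weights toward each missed target."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for OR, which is linearly separable and so learnable here.
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(OR)
preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in OR]
print(preds)  # → [0, 1, 1, 1]
```

XOR, famously, cannot be learned by a single perceptron; that limitation is part of what pushed the field toward the multi-layer networks that dominate today.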

Libratus

Libratus did not use expert domain knowledge, or human data that are specific to [the game domain]. Rather, the AI was able to analyze the game's rules and devise its own strategy. The technology thus could be applied to any number of imperfect-information games. Such hidden information is ubiquitous in real-world strategic interactions, including business negotiation, cybersecurity, finance, strategic pricing and military applications.[8]
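Libratus's actual algorithms are far more elaborate, but a basic building block of the counterfactual-regret family of methods behind such poker bots, regret matching, fits in a short sketch. Here two regret-matching players teach themselves rock-paper-scissors from nothing but the payoff rule; their average strategies approach the game's Nash equilibrium (each move one third of the time).

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    return [p / total for p in positives] if total > 0 else [1 / ACTIONS] * ACTIONS

def self_play(iterations, rng):
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [strategy_from_regrets(regrets[p]) for p in (0, 1)]
        acts = [rng.choices(range(ACTIONS), weights=strats[p])[0] for p in (0, 1)]
        for p in (0, 1):
            opp = acts[1 - p]
            for a in range(ACTIONS):
                # Regret: how much better action a would have done
                # than the action actually taken.
                regrets[p][a] += payoff(a, opp) - payoff(acts[p], opp)
                strategy_sum[p][a] += strats[p][a]
    total = sum(strategy_sum[0])
    return [s / total for s in strategy_sum[0]]  # player 0's average strategy

avg = self_play(20000, random.Random(1))
print([round(p, 2) for p in avg])  # drifts toward the uniform Nash strategy
```

The per-round play keeps cycling, exactly like human rock-paper-scissors; it is the long-run average that settles at the equilibrium. No rock-paper-scissors strategy advice was supplied, only the payoff rule.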

Alpha Zero

After successfully defeating the best human Go player, the Google DeepMind project AlphaGo was generalized into a far simpler version, AlphaZero,[9] which was given only the rules of the game, whether Go or chess, and allowed to play against another instance of itself until it had mastered the game well enough to beat any existing computer model. Garry Kasparov, the former world chess champion, said that AlphaZero developed a style of play that "reflects the truth" of the game rather than "the priorities and prejudices of programmers." That pretty much ends the discussion about the superiority of "Intelligent Design," which turns out to be just another myth that humans tell about themselves and their anthropomorphic gods.
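AlphaZero's deep networks and tree search are far beyond a wiki sketch, but the core idea (derive a strategy purely from the rules by playing the game against itself, with no human strategy knowledge supplied) can be shown exactly on a game small enough to solve. The toy below does this for a tiny Nim variant: take 1 to 3 stones, and whoever takes the last stone wins.

```python
from functools import lru_cache

MOVES = (1, 2, 3)  # the complete rules: take 1, 2, or 3 stones

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win. The strategy is found
    purely by exhaustive self-play over the rules above; no human Nim
    knowledge is supplied."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    return any(not wins(stones - m) for m in MOVES if m <= stones)

# The program rediscovers the classical result: multiples of 4 are lost.
print([n for n in range(1, 13) if not wins(n)])  # → [4, 8, 12]
```

The "reflects the truth of the game" quality Kasparov described is visible even here: the losing positions fall out of the rules themselves, not out of anyone's prior opinion about Nim.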

Open AI

OpenAI is a research institute founded by Elon Musk and Sam Altman to make AI results available to everyone. Their finding in early 2019 (which echoed that of DeepMind at Google) was that this is a very bad idea: like any technology, AI can readily be turned to evil purposes, as reported in Wired.[10] This finding should certainly give pause to any company that seeks to use AI for its own purposes. Sam Altman wrote an essay, Moore's Law for Everything,[11] that was reviewed by Ezra Klein.[12]

The Smart Agent

Now that an A.I. can apply reason to problems, it should be expected that such smart agents are on the horizon (as of 2021).

In Healthcare

  • WHO Releases AI Guidelines for Health, Government Technology (2021-07-12)
    A new report from the World Health Organization (WHO) offers guidance for the ethical use of artificial intelligence (AI) in the health sector. The six primary principles for the use of AI as set forth in the report are to protect autonomy; promote human well-being, safety, and the public interest; ensure transparency, explainability, and intelligibility; foster responsibility and accountability; ensure inclusiveness and equity; and promote responsive and sustainable AI. These principles are intended as a foundation for AI stakeholders, including governments, developers, and society. The report, Ethics and Governance of Artificial Intelligence for Health, finds public- and private-sector collaboration necessary to ensure AI’s accountability.

References

  1. Peter Lee, Learning from Tay’s introduction. (2016-03-25) Microsoft Blog https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/
  2. Sarah Perez, Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism. (2016-03-24) TechCrunch https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/
  3. Farhad Manjoo, Search Bias, Blind Spots And Google. (2018-08-31) New York Times p. B1
  4. Safiya U. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism. (2018-02-20) ISBN 978-1479849949
  5. Bruce Schneier, Hackers Used to Be Humans. Soon, AIs Will Hack Humanity. (2021-04-18) Wired https://www.wired.com/story/opinion-hackers-used-to-be-humans-soon-ais-will-hack-humanity/?bxid=5c5b250d24c17c67f8640083
  6. Kevin Hartnett, Foundations Built for a General Theory of Neural Networks. (2019-01-31) Quanta https://www.quantamagazine.org/foundations-built-for-a-general-theory-of-neural-networks-20190131/
  7. Neil Savage, Neural Net Worth. (2019-06) CACM Vol 62 no 6 p. 12
  8. Byron Spice, CSD's Sandholm, Brown To Receive Minsky Medal. (2018-11-06) Carnegie Mellon News https://www.cmu.edu/news/stories/archives/2018/november/minsky-medal.html
  9. Steven Strogatz, One Giant Step for a Chess-Playing Machine. (2018-12-26) New York Times https://www.nytimes.com/2018/12/26/science/chess-artificial-intelligence.html
  10. Alyssa Foote, The AI Text Generator That's Too Dangerous to Make Public. (2019-02-19) Wired https://www.wired.com/story/ai-text-generator-too-dangerous-to-make-public/
  11. Sam Altman, Moore's Law for Everything. (2021) personal blog https://moores.samaltman.com/
  12. Ezra Klein, Sam Altman on the A.I. Revolution, Trillionaires and the Future of Political Power. (2021-06-11) New York Times https://www.nytimes.com/2021/06/11/opinion/ezra-klein-podcast-sam-altman.html?action=click&module=Opinion&pgtype=Homepage

Other Material

  • See the wiki page on Intelligent Goals.

Categories: Glossary | Authentication | Artificial Intelligence