Artificial Intelligence

Full Title or Meme

Intelligence without human vindictiveness or human compassion.

Context

  • Artificial General Intelligence redirects here. In this context Intelligence relates primarily to Identity Knowledge, but that has many general aspects.
  • Some people think that Artificial Intelligence must be cold or sterile, but we have plenty of evidence that is not so.
    • ELIZA
    • Alexa

Problems

  • Humans seem unable to use Intelligent Design alone to fashion Artificial Intelligence. In fact, researchers continue to go back to nature and human functioning for inspiration and even for algorithms.
  • Training an Artificial Intelligence with human behaviors will result in unacceptable behavior by the Artificial Intelligence.
    • Microsoft released Tay,[1] a web robot (bot) that responded to tweets and chats,[2] in March 2016. The result was disastrous, as Tay learned to be racist from its all-too-human trainers and was shut down within days.
    • Google has been plagued with reports and legal action over its search results nearly continuously since the service was introduced; the latest came from President Donald Trump, who accused it of favoritism toward leftist causes.[3] Researcher Safiya U. Noble has written a book[4] mostly complaining that all-too-human programmers injected their own prejudices into their work. What else could one expect of humans than that they rise above themselves, whatever that might mean in terms of free speech or freedom of religion?
    • The page Right to be Forgotten describes an effort in Europe to teach the search engines to withhold information that people object to, which requires first collecting all of that very information. Be careful what you ask your Artificial Intelligence to do for you; it might just spill the beans some day, perhaps under court order.
    • In the movie "Blade Runner 2049" the protagonist's AI girlfriend asks to have her memory wiped so that the police cannot compel her to testify against him. One would hope that our future AIs will be that compassionate towards us; but that kind of compassionate behavior will probably be illegal.
  • Giving an AI a goal may result in unexpected and undesired solutions.
    • Bruce Schneier described[5] AIs as being like crafty genies: they will grant our wishes and then hack them, exploiting our social, political, and economic systems like never before.
  • "The development of full Artificial Intelligence could spell the end of the human race." - Stephen Hawking, BBC interview, 2014

Explaining AI

Most users want to know WHY a decision was made or an action was performed. The article "Solving for Why"[6] focuses on causality as a way for researchers to overcome shortcomings that have bedeviled more traditional approaches to AI. It is not enough to predict things; a good AI will need to solve things and explain why a choice was made if the decision is to have any support from the human population. People will ask whether the decision was fair and will second-guess the choices made by the AI unless they understand and accept them. Today AI decisions are fragile: a slight change in the environment may produce drastic changes in outcomes, which may well be inexplicable to humans (see the sketch below). It will not be sufficient to convince only experts in the field about the choices made; it is apparent in 2022 that expert opinions about COVID have not been very popular, and even more evidence will be demanded of an AI's decisions, particularly when a decision is costly in any way. Humans like to have choices, which may require the AI to offer choices. That will have the benefit of giving the humans some feeling that they are in control. But beware of "Hobson's choice": Thomas Hobson was a stable keeper in Cambridge who nominally offered his customers a choice of horses but arranged the options so that there was no real choice.
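
To make the fragility point concrete, here is a minimal sketch (the model, weights, and numbers are all hypothetical, invented only for illustration): a toy linear "loan approval" rule whose decision flips under an input change far too small for any human to consider meaningful.

  # A toy linear scoring rule near its decision boundary. The weights
  # and threshold are invented for illustration, not taken from any
  # real system.
  def approve(income_k, debt_k):
      score = 0.8 * income_k - 1.2 * debt_k - 30.0  # score > 0 means approve
      return score > 0.0

  print(approve(income_k=50.0, debt_k=8.33))  # True: approved
  print(approve(income_k=50.0, debt_k=8.34))  # False: denied by a cent-scale change

An explanation grounded in causes ("your debt pushed the score below the threshold") at least tells the applicant what happened; the raw score alone cannot.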

Paradox

Moravec’s Paradox describes our experience that it is hard to teach machines to do things that are easy for most humans (walking, for example) but comparatively easy to teach them things that are challenging for most humans (chess comes to mind).

Polanyi’s Paradox is named for the philosopher Michael Polanyi (1891-1976), who developed the concept of “tacit knowledge”: Central to Michael Polanyi’s thinking was the belief that creative acts (especially acts of discovery) are shot-through or charged with strong personal feelings and commitments (hence the title of his most famous work Personal Knowledge). Arguing against the then dominant position that science was somehow value-free, Michael Polanyi sought to bring into creative tension a concern with reasoned and critical interrogation with other, more ‘tacit’, forms of knowing. Polanyi’s argument was that the informed guesses, hunches and imaginings that are part of exploratory acts are motivated by what he describes as ‘passions’. They might well be aimed at discovering ‘truth’, but they are not necessarily in a form that can be stated in propositional or formal terms. As Michael Polanyi (1967: 4) wrote in The Tacit Dimension, we should start from the fact that ‘we can know more than we can tell‘.[7]

  • For synonyms of tacit knowledge see the wiki page on Common Sense.

Fairness

  • What is called "Algorithmic Bias" today is just the result of training any Artificial Intelligence using any data based on human behavior.
  • We now expect more Fairness from our Artificial Intelligence than we expect from normal humans.
  • "The burgeoning field of algorithmic fairness, part of the much broader field of responsible computing, is aiming to remedy the situation"[8] which is the result of using available data.
  • Scientists have found that the English word "fairness" has many meanings, and so they would prefer terms, like technical disparities, that they can define precisely. The problem is that what society demands is "fairness", which means that scientists could never deliver what society demands.
  • As was demonstrated in the COMPAS program to predict recidivism in convicts, it is not possible to create a single solution that satisfies more than one metric for "fairness"; the sketch following this list illustrates the tension.
  • Explainable Fairness means being able to explain it to a judge so that they can make a determination. -- Cathy O'Neil, ORCAA
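
A minimal numerical sketch of that impossibility result (all numbers hypothetical; this illustrates the general tension, not the COMPAS analysis itself):

  # Two toy groups with different base rates of reoffending. Even a
  # perfectly accurate predictor cannot equalize both selection rates
  # and false-positive rates across them.
  def rates(flagged, actual):
      """Return (selection rate, false-positive rate) for one group."""
      fp = sum(1 for f, a in zip(flagged, actual) if f and not a)
      negatives = sum(1 for a in actual if not a)
      return sum(flagged) / len(actual), fp / negatives

  actual_a = [1, 1, 1, 1, 0, 0, 0, 0]  # base rate 50%
  actual_b = [1, 1, 0, 0, 0, 0, 0, 0]  # base rate 25%

  # Flag exactly the true reoffenders (perfect accuracy):
  print(rates(actual_a, actual_a))  # (0.5, 0.0)
  print(rates(actual_b, actual_b))  # (0.25, 0.0): equal error rates, unequal selection

  # Equalizing selection instead means flagging two non-reoffenders in
  # group B, raising its false-positive rate above group A's:
  print(rates([1, 1, 1, 1, 0, 0, 0, 0], actual_b))  # (0.5, 0.333...)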

Solutions

The wiki page on Trust describes how people can get into the habit of trusting business or technology processes that really do not deserve to be trusted. The following is a listing of some of the technological solutions. The question of whether an Artificial Intelligence is trustworthy is yet to be addressed.

Neural Nets

Still the most popular, they appear to have reached a limit in what they can explain about human intelligence. Perhaps we need to add compassion and Common Sense back in.

The theory of neural nets has been documented in Quanta.[9] The Turing Award given to the winners for applying neural nets to solving problems (as opposed to explaining intelligence) was summarized by Neil Savage,[10] who specifically talks about the need for more involvement of the humanities in long-term solutions.

Libratus

Libratus did not use expert domain knowledge, or human data that are specific to [the game domain]. Rather, the AI was able to analyze the game's rules and devise its own strategy. The technology thus could be applied to any number of imperfect-information games. Such hidden information is ubiquitous in real-world strategic interactions, including business negotiation, cybersecurity, finance, strategic pricing and military applications.[11]
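
Libratus's actual machinery (variants of counterfactual regret minimization running on a supercomputer) is far beyond a wiki example, but the core self-improvement loop can be sketched with plain regret matching on rock-paper-scissors, a toy imperfect-information game. This is an illustrative sketch of the family of techniques, not CMU's implementation:

  import random

  ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

  def payoff(a, b):
      # +1 if action a beats b, -1 if it loses, 0 on a tie
      return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

  def strategy(regrets):
      # Play actions in proportion to their accumulated positive regret
      positive = [max(r, 0.0) for r in regrets]
      total = sum(positive)
      return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

  def train(iterations=100_000):
      regrets = [[0.0] * ACTIONS for _ in range(2)]
      sums = [[0.0] * ACTIONS for _ in range(2)]
      for _ in range(iterations):
          probs = [strategy(regrets[0]), strategy(regrets[1])]
          moves = [random.choices(range(ACTIONS), weights=p)[0] for p in probs]
          for p in (0, 1):
              got = payoff(moves[p], moves[1 - p])
              for a in range(ACTIONS):
                  # Regret: how much better action a would have done
                  regrets[p][a] += payoff(a, moves[1 - p]) - got
                  sums[p][a] += probs[p][a]
      return [[s / iterations for s in ss] for ss in sums]

  print(train())  # both average strategies approach [1/3, 1/3, 1/3], the Nash equilibrium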

Alpha Zero

After successfully defeating the best human Go player, the Google Deep Mind project Alpha Go was generalized into a far simpler version, Alpha Zero,[12] that was given just the rules of the game, whether Go or Chess, and allowed to play against another instance of itself until it had mastered the subject well enough to beat any existing computer model. Garry Kasparov, the former world chess champion, said that Alpha Zero developed a style of play that "reflects the truth" of the game rather than "the priorities and prejudices of programmers." That pretty much ends the discussion about the superiority of "Intelligent Design", which turns out to be just another myth that humans tell about themselves and their anthropomorphic gods.
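
The details of Alpha Zero (deep networks guiding Monte Carlo tree search over millions of self-play games) do not fit here, but the underlying idea of mastering a game given nothing but its rules can be shown on a trivially small game. The sketch below solves the subtraction game (take 1-3 stones; whoever takes the last stone wins) by exhaustive search; the game and the method are stand-ins chosen for brevity, not DeepMind's:

  from functools import lru_cache

  MOVES = (1, 2, 3)

  @lru_cache(maxsize=None)
  def wins(stones):
      """True if the player to move can force a win from this position."""
      return any(m <= stones and not wins(stones - m) for m in MOVES)

  def best_move(stones):
      # Prefer any move that leaves the opponent in a losing position
      for m in MOVES:
          if m <= stones and not wins(stones - m):
              return m
      return 1  # lost position: every move is equally bad

  for n in range(1, 9):
      print(n, wins(n), best_move(n))
  # The "discovered" strategy: always leave the opponent a multiple of 4.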

Open AI

OpenAI is a research institute founded by Elon Musk and Sam Altman to make AI results available to everyone. Their finding (which echoed that of Deep Mind at Google) in early 2019 was that this is a very bad idea. Like any technology, AI can readily be turned to evil purposes, as reported in Wired.[13] This finding should certainly give pause to any company that seeks to use AI for its own purposes; it is too easy for AI to be turned to the purposes of evil intent. Sam Altman wrote an essay, Moore's Law for Everything,[14] that was reviewed by Ezra Klein.[15]

The Smart Agent

Now that an A.I. can apply reason to problems, it should be expected that such agents are on the horizon (in 2021).

In Healthcare

  • WHO Releases AI Guidelines for Health. Government Technology (2021-07-12)
    A new report from the World Health Organization (WHO) offers guidance for the ethical use of artificial intelligence (AI) in the health sector. The six primary principles for the use of AI as set forth in the report are to protect autonomy; promote human well-being, safety, and the public interest; ensure transparency, explainability, and intelligibility; foster responsibility and accountability; ensure inclusiveness and equity; and promote responsive and sustainable AI. These principles are intended as a foundation for AI stakeholders, including governments, developers, and society. The report, Ethics and Governance of Artificial Intelligence for Health, finds public- and private-sector collaboration necessary to ensure AI’s accountability.

Back to the Future

Brian Christian described "The Alignment Problem"[16] as the difficulty of aligning the results provided by an AI with society's goals. It is worse than you might imagine. The results from AI as of 2020 have not proven to be appropriate, in part because the systems are trained on data from social systems that are also deemed to be inappropriate. What most researchers are doing from this point is trying to determine how natural intelligence develops and feeding that back into the AI systems.

The first person to try to understand "The Construction of Reality in the Child" was Jean Piaget.[17] The first line of the book is "To understand how the budding intelligence constructs the external world, we must ask whether the child, in its first months of life, conceives and perceives things as we do, as objects that have substance, that are permanent and of constant dimensions." Now that MRI brain scanners can image a thought within an infant's mind,[18] we can actually imagine that the brain really is just the Chinese Room that so perplexed Searle (above).[19]

And finally, the AI research community is recognizing the turn of AI towards Natural Science and is not too happy about that.[20] Kambhampati reports that, "Indeed, in a 2003 lecture, Turing laureate Leslie Lamport sounded alarms about the very possibility of the future of computing belonging to biology rather than logic, saying it will lead us to living in a world of homeopathy and faith healing."[21] Perhaps we should go back to one of the authors of the ultimate book on logic, the Principia Mathematica, Alfred North Whitehead,[22] who said "Metaphysical categories are not dogmatic statements of the obvious, they are tentative formulations of the ultimate generalities. If we consider any scheme of philosophic categories as one complex assertion, and apply to it the logician's alternative, true or false, the answer must be that the scheme is false. The same answer must be given to a like question respecting the existing formulated principles of any science."[23]

Effective Altruism

Effective Altruism proceeds from the concept that those with wealth should be careful that the resources they contribute back to society have a positive impact; in other words, its adherents want to control the way their money is used. An effort to impact AI in this way is underway.[24]
EA is defined by the Center for Effective Altruism as “an intellectual project, using evidence and reason to figure out how to benefit others as much as possible.” And “evidence and reason” have led many EAs to conclude that the most pressing problem in the world is preventing an apocalypse where an artificially generally intelligent being (AGI) created by humans exterminates us. Some of the billionaires who have committed significant funds to this goal include Elon Musk, Vitalik Buterin, Ben Delo, Jaan Tallinn, Peter Thiel, Dustin Moskovitz, and Sam Bankman-Fried, who was one of EA’s largest funders until the recent bankruptcy of his FTX cryptocurrency platform. As a result, all of this money has shaped the field of AI and its priorities in ways that harm people in marginalized groups while purporting to work on “beneficial artificial general intelligence” that will bring techno-utopia for humanity. This is yet another example of how our technological future is not a linear march toward progress but one that is determined by those who have the money and influence to control it.
A better answer to AI might be to empower those who work directly with underserved people to shape its development. The best way to measure the success of a society is to see how it treats the least of its members.[25]
This story is the fourth and final part of MIT Technology Review’s series on AI colonialism, the idea that artificial intelligence is creating a new colonial world order. The project is a radical departure from the way the AI industry typically operates. Over the last decade, AI researchers have pushed the field to new limits with the dogma “More is more”: Amass more data to produce bigger models (algorithms trained on said data) to produce better results. The approach has led to remarkable breakthroughs—but to costs as well. Companies have relentlessly mined people for their faces, voices, and behaviors to enrich bottom lines. And models built by averaging data from entire populations have sidelined minority and marginalized communities even as they are disproportionately subjected to the technology’s impacts. Over the years, a growing chorus of experts have argued that these impacts are repeating the patterns of colonial history. Global AI development, they say, is impoverishing communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires.

Regulation

Antitrust law is the best-known method for controlling enterprises otherwise too large to be controlled by the government. Perhaps it can be used to control a super-intelligent being of another sort. Perhaps that's all such beings ever are: corporations with computers and imposed limitations.

  • President Biden issued a Blueprint for an AI Bill of Rights (2022-10-04). The problems it addresses are well documented: in America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased; algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination; and unchecked social media data collection has been used to threaten people's opportunities, undermine their privacy, or pervasively track their activity, often without their knowledge or consent. The Blueprint sets out five principles:
  1. Safe and Effective Systems - You should be protected from unsafe or ineffective systems.
  2. Algorithmic Discrimination Protections - You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
  3. Data Privacy - You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
  4. Notice and Explanation - You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  5. Human Alternatives, Consideration, and Fallback - You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
  • A law that goes into effect in New York City in January[26] requires companies to audit their artificial intelligence (AI) hiring systems to assess whether they incorporate racial and gender biases. The law holds hiring companies liable for any biases and could impose fines for violations, but lacks clear guidelines for the AI audit process. While the city’s Department of Consumer and Worker Protection has not offered a timeline for when it will publish rules to implement the law, some companies already are taking steps to comply. Said Anthony Habayeb of AI governance software company Monitaur Inc., "Instead of waiting for someone to tell me what to do… I built controls around these applications because I know like with any software, things can and do go wrong."
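
The statute does not prescribe an audit method, but one widely used screening statistic in employment analytics is the "four-fifths rule" impact ratio, sketched below on hypothetical hiring counts (the data and threshold here are illustrative; this is not the law's own test):

  # Hypothetical hiring counts for two applicant groups.
  hired   = {"group_a": 40,  "group_b": 15}
  applied = {"group_a": 100, "group_b": 60}

  selection = {g: hired[g] / applied[g] for g in hired}  # per-group selection rates
  ratio = min(selection.values()) / max(selection.values())

  print(selection)                     # {'group_a': 0.4, 'group_b': 0.25}
  print(f"impact ratio: {ratio:.2f}")  # 0.62
  print("flag for review" if ratio < 0.8 else "passes 4/5 screen")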

References

  1. Peter Lee, Learning from Tay’s introduction. (2016-03-25) https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/
  2. Sarah Perez, Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism. (2016-03-24) TechCrunch https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/
  3. Farhad Manjoo, Search Bias, Blind Spots And Google. (2018-08-31) New York Times p. B1
  4. Safiya U. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism. (2018-02-20) ISBN 978-1479849949
  5. Bruce Schneier, Hackers Used to Be Humans. Soon, AIs Will Hack Humanity. Wired (2021-04-18) https://www.wired.com/story/opinion-hackers-used-to-be-humans-soon-ais-will-hack-humanity/
  6. Marina Krakovsky, Solving for Why. CACM 65 No. 2 (2022-02) pp 11ff
  7. Mark K. Smith, Michael Polanyi and tacit knowledge. infed
  8. Marina Krakovsky, Formalizing Fairness CACM 65 No. 8 (2022-08) p 11ff.
  9. Kevin Hartnett, Foundations Built for a General Theory of Neural Networks. (2019-01-31) Quanta https://www.quantamagazine.org/foundations-built-for-a-general-theory-of-neural-networks-20190131/
  10. Neil Savage, Neural Net Worth. CACM Vol 62 no 6 p. 12 (2019-06)
  11. Byron Spice, CSD's Sandholm, Brown To Receive Minsky Medal. (2018-11-06) Carnegie-Mellon News https://www.cmu.edu/news/stories/archives/2018/november/minsky-medal.html
  12. Steven Strogatz, One Giant Step for a Chess-Playing Machine. (2018-12-26) New York Times https://www.nytimes.com/2018/12/26/science/chess-artificial-intelligence.html
  13. Alyssa Foote, The AI Text Generator That's Too Dangerous to Make Public. (2019-02-19) Wired Magazine https://www.wired.com/story/ai-text-generator-too-dangerous-to-make-public/
  14. Sam Altman Moore's Law for Everything personal blog (2021) https://moores.samaltman.com/
  15. Ezra Klein Sam Altman on the A.I. Revolution, Trillionaires and the Future of Political Power NY Time (2021-06-11) https://www.nytimes.com/2021/06/11/opinion/ezra-klein-podcast-sam-altman.html?action=click&module=Opinion&pgtype=Homepage
  16. Brian Christian The Alignment Problem W. W. Norton (2020-10-06) ISBN 978-0-393-86833-3
  17. Jean Piaget, The Construction of Reality in the Child. (original 1954) Ballantine ISBN 345-02338-2-165
  18. Rachel Fritts, How do Babies perceive the World? MIT News July/August 2021 p 24 ff
  19. James Somers, Head Space The New Yorker Dec. 6, 2021, p 30 ff.
  20. Subbarao Kambhampati, AI as (an Ersatz) Natural Science. CACM 65 No 9 (2022-09) pp 8-9
  21. Leslie Lamport, The Future of Computing: Logic or Biology (2003-07-23) https://bit.ly/3ahrsaI
  22. Alfred North Whitehead and Bertrand Russell, Principia Mathematica. Cambridge UP (Second edition 1927)
  23. Alfred North Whitehead, Process and Reality Free Press (1929)
  24. Timnit Gebru, Effective Altruism Is Pushing a Dangerous Brand of "AI Safety". Wired (2022-11-30) https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/
  25. Karen Hao A new vision of artificial intelligence for the people Technology Review (2022-04-22)
  26. Richard Vanderford, New York AI Bias Law Prompts Uncertainty Wall Street Journal (2022-09-21)

Other Material