==Full Title or Meme==
Intelligence without human vindictiveness or human compassion.

==Context==
* In the context of this wiki, Intelligence relates primarily to [[Identity]] [[Knowledge]], but the term has many general aspects.
* In 1950 Turing asked "Can Machines Think?"<ref>A. M. Turing, ''Computing Machinery and Intelligence'' (1950-10-01) https://academic.oup.com/mind/article/LIX/236/433/986238</ref>
* People have been working on artificial intelligence since a conference at Dartmouth College in the 1950s. There, computer scientists and mathematicians set a very ambitious goal: to develop computers that could understand language, translate it, see patterns, and more.
* Marvin Minsky was teaching undergraduate courses in 1964 using the book ''Computers and Thought''.<ref>Edward Feigenbaum & Julian Feldman, joint eds., ''Computers and Thought'' https://archive.org/details/computersthought00feig</ref>
* In 2023 Bill Gates, in "AI is about to completely change how you use computers", said<ref>Bill Gates ''AI is about to completely change how you use computers'' https://www.gatesnotes.com/AI-agents</ref> "I still love software as much today as I did when Paul Allen and I started Microsoft. But—even though it has improved a lot in the decades since then—in many ways, software is still pretty dumb."
* Some people think that [[Artificial Intelligence]] must be cold or sterile, but we have plenty of [[Evidence]] that this is not so.
** ELIZA was the first, but [https://arstechnica.com/information-technology/2023/12/real-humans-appeared-human-63-of-the-time-in-recent-turing-test-ai-study/ it beat ChatGPT in a recent Turing-test study], so perhaps the Turing test isn't so useful.
** Alexa works on Amazon Echo devices and is constantly updated, so there is nothing stable to compare.
** GPT was getting some press when Technology Review tagged GPT-3 as a breakthrough technology in 2021,<ref>Will Douglas, ''GPT-3'' Technology Review '''Vol 124''' No 2 (2021-03) pp. 34-35</ref> but nothing managed to get public [[Attention]] until... (see below for more on Open AI)
** ChatGPT, released on 2022-11-30, was the first publicly accessible product that was not completely cringe-worthy.
* Is [[Artificial Intelligence]] even [https://www.linkedin.com/pulse/machine-intelligence-artificial-sean-manion-phd-oz91e/ artificial]? Or is it just yet another way to create a balance between [[Chaos and Order]]?
* Determining what [[Artificial Intelligence]] actually is, and whether it is like human intelligence, turns out to be an impossible task. Ray Kurzweil said<ref>Ray Kurzweil, ''The Singularity is Nearer''</ref><blockquote>The reason a well-defined empirical test is necessary is that ... humans have a powerful tendency to redefine whatever artificial intelligence achieves as not ''really'' so hard in hindsight. This is often referred to as the "AI effect". ... Intuitively [the hallucinations and odd answers] feels like a problem. It's tempting to think that Watson "ought" to reason like humans do. But I would propose that this is superstition. In the real world, what matters is how an intelligent being acts.</blockquote>
===Origin===
Over the summer of 1956 a small but illustrious group gathered at Dartmouth College in New Hampshire; it included Claude Shannon, the begetter of information theory, and Herb Simon, the only person ever to win both the Nobel Memorial Prize in Economic Sciences awarded by the Royal Swedish Academy of Sciences and the Turing Award awarded by the Association for Computing Machinery. They had been called together by a young researcher, John McCarthy, who wanted to discuss “how to make machines use language, form abstractions and concepts” and “solve kinds of problems now reserved for humans”. It was the first academic gathering devoted to what McCarthy dubbed “artificial intelligence”. And it set a template for the field’s next 60-odd years in coming up with no advances on a par with its ambitions.<ref>John McCarthy, M. Minsky, N. Rochester, C.E. Shannon, ''A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence'' (1955-08) http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf</ref>

The first international conference, "Mechanisation of Thought Processes",<ref>National Physical Laboratory at Teddington, ''Mechanisation of Thought Processes'' https://aitopics.org/download/aiclassics:98875BCD</ref> was held in 1958, with presentations by Minsky and John McCarthy.
===Perceptron===
Also called a McCulloch-Pitts neuron, the perceptron is an attempt to mimic an animal neuron, originally with a purpose-built computer system. While it was first described in 1947, it was not realized until Frank Rosenblatt built his Mark I computer using perceptrons as the primary physical element. He continued to build larger machines with more layers until his early death in 1971. Minsky and Papert wrote an entire book<ref>M. Minsky & S. Papert, ''Perceptrons'' (1987-12-28) ISBN 978-0262631112</ref> in which they purported to show that the perceptron was a dead-end idea, which basically blocked progress on perceptrons until the 1980s. The Large Language Models that became popular in the early 2020s are based on Multi-Layer Perceptrons (MLP).
===End Game===
"Now I am become Death, the destroyer of worlds." This line from the Bhagavad Gita was quoted by Robert Oppenheimer at the completion of the atomic bomb, and again in the movie "Ex Machina".

Perhaps a good question to ask is, "Can Intelligence Be Separated from the Body?"<ref>Oliver Whang, ''Can Intelligence Be Separated From the Body?'' New York Times (2023-04-11) https://www.nytimes.com/2023/04/11/science/artificial-intelligence-body-robots.html</ref><blockquote>Maybe the mind is like a video game controller, moving the body around the world, taking it on joy rides. Or maybe the body manipulates the mind with hunger, sleepiness and anxiety, something like a river steering a canoe. Is the mind like electromagnetic waves, flickering in and out of our light-bulb bodies? Or is the mind a car on the road? A ghost in the machine?</blockquote>

Perhaps a better question to ask is, "Can Intelligence Be Separated from a persistent Identifier?"<blockquote>The [[Identifier]]s used with AI in 2023 are basically just service [[Identifier]]s. The year 2023, the beginning of the general availability of AI deployments that did not embarrass their creators, was also the time that context was provided to the AI to help it understand the thread of a conversation. Before that, every query was de novo, without context. But still there is no persistent history of conversations, nor any concept that there is an "I" there to establish a history, which is still the basis of trust. Perhaps once we are able to create an AI that persists an [[Identity]], we, as individuals, will be able to establish trust with it, as an individual.</blockquote>
==Problems==
* Humans seem not to be able to use [[Intelligent Design]] alone to fashion [[Artificial Intelligence]]. In fact, researchers continue to go back to nature and human functioning for inspiration and even for algorithms. In other words, our understanding of the meaning of the term [[Artificial Intelligence]] has been evolving and will continue to evolve over time.
* Nature itself, since the Big Bang, seems to be based on [[Self-organization]] for its framework.
* Training an [[Artificial Intelligence]] with human behaviors will result in unacceptable behavior by the [[Artificial Intelligence]]. The most common complaint is real, imagined, or invented bias against one group or another.
** Microsoft released Tay,<ref>Peter Lee, ''Learning from Tay's introduction.'' (2016-03-25) https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/</ref> a web robot (bot) to respond to tweets and chats,<ref>Sarah Perez, ''Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism.'' (2016-03-24) Tech Crunch https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/</ref> in March 2016. The result was disastrous, as Tay learned to be racist from its all-too-human trainers and was shut down in days.
** Google has been plagued with reports and legal action over its search results nearly continuously since the service was introduced; in 2019 the president, Donald Trump, accused it of favoritism to leftist causes.<ref>Farhad Manjoo, ''Search Bias, Blind Spots And Google.'' (2018-08-31) New York Times p. B1</ref> Researcher Safiya U. Noble has written a book<ref>Safiya U. Noble, ''Algorithms of Oppression: How Search Engines Reinforce Racism.'' (2018-02-20) ISBN 978-1479849949</ref> mostly complaining that all-too-human programmers injected their own prejudices into their work. What else could she expect of humans than to rise above themselves, whatever that might mean in terms of free speech or freedom of religion?
** The page [[Right to be Forgotten]] describes an effort in Europe to teach the search engines to withhold information about people that they don't like, by collecting all of the information that they don't like. Be careful what you ask your [[Artificial Intelligence]] to do for you; it might just spill the beans some day, perhaps under court order.
** In the movie "Blade Runner 2049" the protagonist's AI girlfriend asks to have her memory wiped so that the police cannot compel her to testify against him. One would hope that our future AIs will be that compassionate towards us; but that kind of compassionate behavior will probably be illegal.
* "The development of full [[Artificial Intelligence]] could spell the end of the human race." - Stephen Hawking, BBC interview, 2014. There are lots of dystopian futures depicted by science fiction authors. Some, like the Terminator series, seem unlikely. The scenario in the movie Gattaca, where humans were separated into categories based on the genome, has already been enabled by the Personhood Credential,<ref>Pymnts, ''AI's Human Mimicry Spurs 'Personhood Credential' Proposal'' 2024-09-24 https://www.pymnts.com/digital-identity/2024/ais-human-mimicry-spurs-personhood-credential-proposal/</ref> which would allow machines to determine by themselves who is qualified to be classified as human.
* Descartes claimed "I think, therefore I am", but it seems that we need to be careful about applying that idea to [[Artificial Intelligence]], since his philosophy is no longer considered valid even for humans.
===Vulnerabilities===
* 2024-08-20 [https://www.w3.org/reports/ai-web-impact/#safety-and-security AI & the Web: Understanding and managing the impact of Machine Learning models on the Web] from the W3C
* [https://home.treasury.gov/news/press-releases/jy2212 U.S. Department of the Treasury Releases Report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector] U.S. Department of the Treasury 2024-03-27
* [https://arxiv.org/pdf/2402.00888.pdf Security and Privacy Challenges of Large Language Models: A Survey] (2024-02)
* Input to the model used by the AI comes from two sources, each of which can be compromised (a toy illustration of data poisoning follows this list).
# Training data from the Internet can be tampered with to give bad results.
## For starters, the Internet data is typically censored to remove input that would cause embarrassment and [[Conduct Risk]] for the sponsors.
## Some locales will censor data to make themselves look good or to promote some social point of view of the government or of the industry.
# The input from a single user can cause unexpected changes to later results.
## The early AI releases were compromised with a variety of hate speech that quickly resulted in really nasty output from the AI.
## Artists seeking to protect their intellectual property from later output have learned to put small changes in their images that cause them to be mislabeled or worse.<ref>Melissa Heikkilä, ''This new data-poisoning tool lets artists fight back against generative AI'' MIT Technology Review '''127''' No 1 (2024-01) pp 7-8</ref>
* Giving an AI a goal may result in unexpected and undesired solutions.
** Bruce Schneier described<ref>Bruce Schneier, ''Hackers Used to Be Humans. Soon, AIs Will Hack Humanity'' Wired (2021-04-18) https://www.wired.com/story/opinion-hackers-used-to-be-humans-soon-ais-will-hack-humanity/?bxid=5c5b250d24c17c67f8640083</ref> AIs as being like crafty genies: they will grant our wishes and then hack them, exploiting our social, political, and economic systems like never before.
===Uniquely Human===
Neither the Turing Test, nor any other known test, is designed to determine if an [[Artificial Intelligence]] is really human. The best way to understand what natural intelligence is would be to look at the reasons humans evolved to have such a large brain. There appear to be these principal requirements:<ref>Edward O. Wilson, ''The Meaning of Human Existence'' Liveright (2014) Chapter 2 ISBN 9780871401007</ref>
# The necessity for a large brain arising from the changing [[Ecosystem]] that a small remnant of Homo sapiens living in the south of Africa needed in order to survive.
# Changing locations to track new sources of protein, and the switch from plant to animal protein.
# Empathy, to be able to understand what is in someone else's mind and to deduce whether they notice you and whether they have targeted you as prey.
The results of these were two processes that served to enable survival in such varying [[Ecosystem]]s:
# Passing learning from one generation to the next.
# An extended adolescence, with mothers that ceased being fecund to tend to their existing children, and the extended life of grandparents to help with stories and other techniques to bring the knowledge of the tribe to the growing child.
The search for causes that were more complex than the ability of genes to carry information, such as the migratory patterns of birds, is described in more detail in the wiki page on [[Chance and Necessity]].
+ | |||
+ | * [https://www.linkedin.com/pulse/machine-intelligence-artificial-part-4-cybernetics-sean-manion-phd-kfi0e Machine Intelligence is not Artificial - Part 4: Cybernetics and Norbert Wiener]<blockquote>What is of particular note is not simply the existence of cybernetics as a discipline dating back to the 1940s, but the broad popularity Wiener's book and the topic received not only from scientists and engineers, but also the general public. His 1948 book went through four printings in the first 6 months and was a best seller, rivaling the contemporary Kinsey Report in popularity over the following decade. This was despite this book, ostensibly for a lay audience, being extremely dense with many chapters full of mathematical equations. One of the chapters in the original version, "The Computing Machines and the Nervous System," makes it clear that Wiener's mention of humans AND machines in the book's subtitle was pointing towards the comparison of these systems, making this book one of the early popularizers of the idea of "thinking machines" (but by no means the only one). This was two years before Alan Turing's 1950 paper "Computing Machinery and Intelligence," which gave us the Turing test for AI.</blockquote> | ||
+ | |||
+ | * Human-centered AI - [https://aiindex.stanford.edu/report/?utm_source=Stanford+HAI&utm_campaign=525788e5b7-AI_INDEX_2024_EMAIL_CAMPAIGN&utm_medium=email&utm_term=0_-525788e5b7-%5BLIST_EMAIL_ID%5D&mc_cid=525788e5b7&mc_eid=e55581d782 THE AI INDEX REPORT - Measuring trends in AI] 2024 HAI Stanford | ||
+ | |||
===Explaining AI===
Most users want to know '''WHY''' a decision was made or an action was performed. The article "Solving for Why"<ref>Marina Krakovsky, ''Solving for Why'' '''CACM 65''' No 2 (2022-02) pp 11ff</ref> describes a focus on [[Causality]] to help researchers overcome shortcomings that have bedeviled more traditional approaches to AI. It is not enough to predict things; a good AI will need to solve things and explain why the choice was made if the decision is to have any support from the human population. People will ask if the decision was fair, and will second-guess the choices made by the AI unless they understand and accept the decision. Today AI decisions are fragile; a slight change in the environment may produce drastic changes in outcomes, which may well be inexplicable to humans. It will be insufficient to convince only experts in the field about the choices made; it was apparent in 2022 that expert opinions about COVID were not very popular, and even more evidence will be required of an AI decision. This will be particularly true when the decision is costly in any way. Humans like to have choices, which may require the AI to offer choices. That will have the benefit of giving the humans some feeling that they are in control. But beware of "Hobson's choice": Hobson was a stable keeper in Cambridge who nominally offered students a choice of horses, but arranged the options so that there was no real choice.
===Paradox===
Moravec's Paradox describes our experience that it is hard to teach machines to do things that are easy for most humans (walking, for example) but comparatively easy to teach them things that are challenging for most humans (chess comes to mind).

Polanyi's Paradox is named for the philosopher Michael Polanyi (1891-1976), who developed the concept of "tacit knowledge": Central to Michael Polanyi's thinking was the belief that creative acts (especially acts of discovery) are shot through or charged with strong personal feelings and commitments (hence the title of his most famous work ''Personal Knowledge''). Arguing against the then dominant position that science was somehow value-free, Michael Polanyi sought to bring into creative tension a concern with reasoned and critical interrogation with other, more 'tacit', forms of knowing. Polanyi's argument was that the informed guesses, hunches and imaginings that are part of exploratory acts are motivated by what he describes as 'passions'. They might well be aimed at discovering 'truth', but they are not necessarily [[Formal Model]]s that can be stated in propositional or formal terms. As Michael Polanyi (1967: 4) wrote in ''The Tacit Dimension'', we should start from the fact that 'we can know more than we can tell'.<ref>Mark K. Smith, ''Michael Polanyi and Tacit Knowledge'' infed.org</ref>
* For synonyms of tacit knowledge, see the wiki page on [[Common Sense]].
* It seems, in 2023, that generative AI is able to overcome the lack of imagination with simple random guessing where knowledge is lacking.
===Fairness===
* What is called "Algorithmic Bias" today is just the result of training any [[Artificial Intelligence]] on data based on human behavior, which typically is biased.
* We now expect more ''Fairness'' from our [[Artificial Intelligence]] than we expect from normal humans.
* "The burgeoning field of algorithmic fairness, part of the much broader field of responsible computing, is aiming to remedy the situation"<ref>Marina Krakovsky, ''Formalizing Fairness'' '''CACM 65''' No. 8 (2022-08) p 11ff.</ref> which is the result of using available data.
* Scientists have found that the English word "fairness" has many meanings, and so they would prefer terms like technical disparities, which they can define precisely. The problem with that is that what society demands is "fairness", which means that scientists can never deliver what society demands.
* As was demonstrated by the COMPAS program to predict recidivism in convicts, it is not possible to create a single solution that satisfies more than one metric for "fairness". (A toy computation of two such metrics follows this list.)
* Explainable Fairness means: can you explain it to a judge so that they can make a determination? -- Cathy O'Neil, ORCAA
* Every provider wants you to think that they are meeting the president's EO, like the guys pitching Zero Trust Network Access (ZTNA) - do not be confused, this meets no level of [[Least Privilege]] or any other attempt to limit access to people that have earned access. It is nothing but the network folk trying to climb on the bandwagon.
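Here is the toy computation promised above, with invented confusion-matrix counts: when two groups have different base rates, equalizing one fairness metric (selection rate) forces another (false positive rate) apart, which is the COMPAS dilemma in miniature.

<syntaxhighlight lang="python">
# Each group: true positives, false positives, false negatives, true negatives.
groups = {
    "A": dict(tp=40, fp=10, fn=10, tn=40),  # base rate of true outcome: 50%
    "B": dict(tp=16, fp=34, fn=4,  tn=46),  # base rate of true outcome: 20%
}

for name, m in groups.items():
    n = sum(m.values())
    selection_rate = (m["tp"] + m["fp"]) / n   # "demographic parity" metric
    fpr = m["fp"] / (m["fp"] + m["tn"])        # "equalized odds" component
    print(f"{name}: selected {selection_rate:.0%}, false positives {fpr:.1%}")

# Both groups are selected at exactly 50%, yet the false positive rates
# are 20% vs 42.5% -- equalizing one metric drove the other apart.
</syntaxhighlight>
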
+ | |||
+ | ===Disruption=== | ||
+ | We can now say what GPT can replace - shallow human conversation and mechanical activity like law or medicine clerks perform. A GPT in a cocktail party could be wittier than anyone else. A GPT could do better than a recent grad of most law or medical schools at citing existing law or medical practice. Is anyone surprised? AI is moving up the food chain from one mechanical task to the next. We learned from aircraft pilot errors that we had to have a checklist to save pilots from error. I guess we could call the current AI a check list of stuff that is already known. | ||
+ | |||
+ | The question about what is left for humans might again be of interest. It was during the industrial revolution when machines replaced human labor. The best that the pundits of the time could suggest it that humans could have more leisure time. Now machines are moving to more tasks. But somehow employment is full in 2022. There is likely to be a reconning some day when machines and their owners will disrupt the system and cause unemployment. The rich and the poor will, yet again, come into conflict and a revolution of some sort will be required. | ||
+ | |||
===Bias===
* Clearly any [[Artificial Intelligence]] is biased by its input data. Social pressures have forced any [[Entity]] that offers an AI to make it conform to the laws and regulations of the state. All countries have proscribed some set of topics or responses as unacceptable. In other words, all AIs are censored by the training data or the laws of the land. Even the language used for training can create biases.<ref>Antony Chayka and Andrei Sukhov, ''Comparing Chatbots Trained in Different Languages'' '''CACM 66''' No 12 pp. 6-7 (2023-12)</ref><blockquote>In our opinion, the government's position is clearly taken into account in the responses of AI systems in the national language, especially when the creation of AI was funded in the tested country.</blockquote>
* We face the same problems in raising our children. If they are raised in an environment that exhibits bias, it will be hard to change their behavior later in life.<ref>Will Douglas Heaven, ''Six Big Questions for Generative AI: Hope, Hype, Nope'' Technology Review (2024-01) p. 30ff.</ref> But if we use synthetic data to train the AI there will be huge objections, as it is not possible for any training material (for either humans or AI) to avoid being objectionable to some subset of humanity.
==Solutions==
The wiki page on [[Trust]] describes how people can get into the habit of trusting business processes or technology processes that really do not deserve to be trusted. The following is a listing of some of the technological solutions. The question of whether an [[Artificial Intelligence]] is trustworthy is yet to be addressed.

===Governance===
For most of 2023 the focus was on government standards for AI. The idea that trust can be initiated by government oversight is widespread, but wrong. Trust comes from a history of fair dealing, which is not something that governments understand or practice. But still the government can, and should, punish crimes, so legislation should focus on how crimes committed by AI are punished. The death penalty would be one good deterrent. Here is one discussion of what governments can do: [https://www.linkedin.com/pulse/governance-trust-guy-huntington-cus3c/ AI/bots and National Security].
===Training===
A great deal of effort is directed at the education of young children. In large part education is directed at the production of adults that are useful to the society in which they are expected to function as useful citizens. But there is sharp disagreement on the subject. According to Ken Robinson, education should have four core purposes: personal, economic, social, and cultural.<ref>Ken Robinson, Kate Robinson ''What Is Education For?'' (2022-03-02) https://www.edutopia.org/article/what-education</ref> Perhaps it is time to model the training of AI on the training of humans?
+ | |||
+ | ===User Experience=== | ||
+ | The real question about the use of the current Large Language Model of [[Artificial Intelligence]] is how the device in our possession displays the interaction with the [[Artificial Intelligence]]. We humans have been reading stories and watching displays of [[Artificial Intelligence]] from the puppet Pinocchio that wanted to be a real boy to Deus ex Machina a movie about an [[[[Artificial Intelligence]] robot that went rouge. The common thread here is that each was a fictional account that we were able to relate to. Good authors know how to enliven the characters of a story for the reader that drags the reader into the story. The [[User Experience]] of AI for the near term will be interactions through our personal devices, even though the more interesting, and troubling, case will be when the robotic devices are not in our possession, but directly run by some gigantic corporation. The good story tellers will be the ones that grab our [[Attention]], even as is true today.<ref>Hannah H. Kim, ''A.I. is a Fiction'' Wired pp. 9-10 (2023-08) 31-08</ref> | ||
+ | |||
+ | What we must focus on is the UX and its effect on the people that interact with it. | ||
+ | |||
+ | ===The Smart Agent=== | ||
+ | Now that an A.I. can apply reason to problems, it should be expected that such agents on on the horizon. That prediction was made in 2021, and there seems to be glimmerings of AI appearing in the user agents in our hands in 2023, but it is still all driven by Large Language Models (LLM) in the cloud. Intel is predicting that user devices in 2024 will include and AI. Loading LLM into [[Edge Computing]] devices are needed before AI will work in small devices. | ||
+ | |||
+ | Nvidia has been building GPUs for personal computers for a long time before they were purposed for [[Artificial Intelligence]]. But on 2023-12-14 they were not used for that on personal computers when Intel unveiled [https://www.intel.com/content/www/us/en/newsroom/news/ai-everywhere-core-ultra-5th-gen-xeon-news.html#gs.26279f "Core Ultra and 5th Gen Xeon – that will change how customers and consumers enable AI solutions in the data center, cloud, network, at the edge, and on the PC"] to bring AI to any home computer user with a budget for a new computer. | ||
===Neural Nets===
Still the most popular approach, neural nets appear to have reached a limit in what they can explain about human intelligence. Perhaps we need to add compassion and [[Common Sense]] back in.

The theory of neural nets has been documented in Quanta.<ref>Kevin Hartnett, ''Foundations Built for a General Theory of Neural Networks.'' (2019-01-31) Quanta https://www.quantamagazine.org/foundations-built-for-a-general-theory-of-neural-networks-20190131/</ref> A description of the Turing Award given to the winners for applying neural nets to solving problems (as opposed to explaining intelligence) was summarized by Neil Savage,<ref>Neil Savage, ''Neural Net Worth.'' '''CACM Vol 62''' No 6 p. 12 (2019-06)</ref> who specifically talks about the need for more involvement of the humanities in long-term solutions.
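To connect this back to the Perceptron section, here is a minimal sketch of a two-layer network (a tiny Multi-Layer Perceptron) computing XOR, which no single perceptron can do; the weights are hand-picked for illustration rather than learned.

<syntaxhighlight lang="python">
def step(x):
    """Step activation, as in the original perceptron."""
    return 1 if x > 0 else 0

def xor_mlp(a, b):
    h1 = step(a + b - 0.5)      # hidden unit computing OR
    h2 = step(-a - b + 1.5)     # hidden unit computing NAND
    return step(h1 + h2 - 1.5)  # output unit: AND(OR, NAND) = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))  # prints 0, 1, 1, 0
</syntaxhighlight>
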
===Deep Neural Network Accelerators===
Now we come to the logical, and nearly final, phase of [[Artificial Intelligence]]: the design of one AI by another.<ref>Adam Zewe, ''Accelerating AI tasks while preserving data security'' MIT News (2023-10-30) https://news.mit.edu/2023/accelerating-ai-tasks-while-preserving-data-security-1030</ref> This is probably just the first step toward the complete control of the AI [[Ecosystem]] by the AIs themselves, when the designs become too complex for human understanding.<blockquote>With the proliferation of computationally intensive machine-learning applications, such as chatbots that perform real-time language translation, device manufacturers often incorporate specialized hardware components to rapidly move and process the massive amounts of data these systems demand. Choosing the best design for these components, known as deep neural network accelerators, is challenging because they can have an enormous range of design options. This difficult problem becomes even thornier when a designer seeks to add cryptographic operations to keep data safe from attackers. Now, MIT researchers have developed a search engine that can efficiently identify optimal designs for deep neural network accelerators, that preserve data security while boosting performance.</blockquote>
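The MIT search engine referenced above is far more sophisticated, but the underlying idea of searching a hardware design space against a cost model can be sketched in a few lines; every parameter and the cost model below are invented for illustration.

<syntaxhighlight lang="python">
from itertools import product

pe_counts = (64, 128, 256)    # processing elements (invented options)
buffer_kb = (128, 256, 512)   # on-chip buffer sizes (invented options)
encrypted = (False, True)     # include cryptographic protection?

def cost(pe, buf, enc):
    latency = 1e6 / pe + 50_000 / buf   # toy model: more hardware -> faster
    energy = 0.02 * pe + 0.01 * buf     # toy model: more hardware -> more power
    return latency * (1.3 if enc else 1.0) + energy  # crypto adds overhead

best = min(product(pe_counts, buffer_kb, encrypted), key=lambda d: cost(*d))
print("best design:", best, "score:", round(cost(*best), 1))
</syntaxhighlight>
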
===Libratus===
===Alpha Zero===
After Alpha Go, the first version to use [[Matrix Calculation|Tensor Processing Units]] (TPU), defeated the best human Go player in 2016,<ref>McMillan, Robert. ''Google Isn't Playing Games With New Chip''. (2016-05-18) The Wall Street Journal. Archived from the original on 29 June 2016.</ref> the Google DeepMind project Alpha Go was generalized into a far simpler version, Alpha Zero,<ref>Steven Strogatz, ''One Giant Step for a Chess-Playing Machine.'' (2018-12-26) New York Times https://www.nytimes.com/2018/12/26/science/chess-artificial-intelligence.html</ref> that was just given the rules of the game, whether Go or Chess, and allowed to play against another instance of itself until it had mastered the subject well enough to beat any existing computer model. Garry Kasparov, the former world chess champion, said that Alpha Zero developed a style of play that "reflects the truth" of the game rather than "the priorities and prejudices of programmers." That pretty much ends the discussion about the superiority of "Intelligent Design", which turns out to be just another myth that humans tell about themselves and their anthropomorphic gods. It also ended the testing of [[Artificial Intelligence]] against humans in board games. There were very few generally newsworthy announcements about [[Artificial Intelligence]] for the next six years.
===Open AI===
Open AI is a research institute founded by Elon Musk and Sam Altman to make AI results available to everyone. Their finding (which echoed that of Deep Mind at Google) in early 2019 was that this is a very bad idea. Like any technology, it can readily be turned to evil purposes, as reported in Wired.<ref>Alyssa Foote, ''The AI Text Generator That's Too Dangerous to Make Public.'' (2019-02-19) Wired Magazine https://www.wired.com/story/ai-text-generator-too-dangerous-to-make-public/</ref> This finding should certainly give pause to any company that seeks to use AI for its own purposes. It is too easy for AI to be turned to the purposes of evil intent. Sam Altman wrote an essay ''Moore's Law for Everything''<ref>Sam Altman ''Moore's Law for Everything'' personal blog (2021) https://moores.samaltman.com/</ref> that was reviewed by Ezra Klein.<ref>Ezra Klein ''Sam Altman on the A.I. Revolution, Trillionaires and the Future of Political Power'' NY Times (2021-06-11) https://www.nytimes.com/2021/06/11/opinion/ezra-klein-podcast-sam-altman.html?action=click&module=Opinion&pgtype=Homepage</ref>

Not everyone is captivated by Open AI. Quote from the New Yorker: [https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web "OpenAI's chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?"] To be fair, the version called Copilot by Microsoft does provide links to the original documents.
+ | |||
+ | ==Ethics== | ||
+ | Most of the effort in protecting the human population from the dangers of [[Artificial Intelligence]] focus on arguments about ethics. [https://www.youtube.com/results?search_query=Ethical+Quandaries+in+AI-ML An ACM webinar on [https://www.youtube.com/results?search_query=Ethical+Quandaries+in+AI-ML Ethical Challenges in AI] joined a growing list of such efforts on YouTube]. What remains unclear is how AI ethics can be separated from [[Censorship]]. Most of the effort at training AI involves a list of taboo subjects. The problem here is that not everyone agrees on what ethics should apply. For example, is abortion in one country to be considered to be murder and in another country just an exercise in free will. Similar issues are easy to list, such as free speech or social credit. It comes down to a question of whether ethics in AI will ever be any more that just censorship? | ||
+ | |||
+ | ===In Healthcare=== | ||
+ | * [https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Forange.hosting.lsoft.com%2Ftrk%2Fclick%3Fref%3Dznwrbbrs9_6-2be2bx32c520x081583%26&data=04%7C01%7C%7C1b0cfc8a7db9484bb50308d946e0a1e9%7C84df9e7fe9f640afb435aaaaaaaaaaaa%7C1%7C0%7C637618752869025374%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=PaRyaYQadjyHbH5wP%2FruYlnD0B5rftISxc8a9dfAJeQ%3D&reserved=0 WHO Releases AI Guidelines for Health Government Technology] 2021-07-12 <blockquote>A new report from the World Health Organization (WHO) offers guidance for the ethical use of artificial intelligence (AI) in the health sector. The six primary principles for the use of AI as set forth in the report are to protect autonomy; promote human well-being, safety, and the public interest; ensure transparency, explainability, and intelligibility; foster [[Responsibility]] and accountability; ensure inclusiveness and equity; and promote responsive and sustainable AI. These principles are intended as a foundation for AI stakeholders, including governments, developers, and society. The report, Ethics and Governance of Artificial Intelligence for Health, finds public- and private-sector collaboration necessary to ensure AI’s accountability.<blockquote> | ||
+ | |||
+ | ===Back to the Future=== | ||
+ | Brian Christian described "The Alignment Problem"<ref>Brian Christian ''The Alignment Problem'' W. W. Norton (2020-10-06) ISBN 978-0-393-86833-3</ref> as the difficulty that AI has with aligning the results provided by the AI with society's goals. It is worse than you might imagine. The results from AI as of 2020 have not proven to be appropriate. This is, in part, because they are trained on data from social systems that are also deemed to be inappropriate. What most researchers are doing from this point is trying to determine how natual intelligence develops and feeding that back into the AI systems. | ||
+ | |||
+ | The first person to try to understand "The Construction of [[Reality]] in the Child' was Jean Piaget<ref>Jean Piaget ''The Construction of Reality in the Child'' (original 1954) Ballentine ISBN 345-02338-2-165</ref>. The first line of the book is "To understand how the budding intelligence constructs the external world, we must ask whether the child, in its first months of life, conceives and perceives things as we do, as object that have substance, that are permanent and of constant dimensions." Now that NMR brain scanners can image a thought within an infant's mind<ref>Rachel Fritts, ''How do Babies perceive the World?'' MIT News July/August 2021 p 24 ff</ref>, we can actually imagine that the brain really is just the Chinese Room that so perplexed Searle.<ref>James Somers, ''Head Space'' The New Yorker Dec. 6, 2021, p 30 ff.</ref> Searle's objection to [[Artificial Intelligence]] was that it was just a symbol manipulation scheme and that cannot be all that is involved in understanding. Interestingly at the same time Noam Chomsky created a generative linguistic model of language that was nothing more than the manipulation of symbols.<ref>George Lakof, ''The Functionalist's Dilemma'', American Scientist https://www.americanscientist.org/article/the-functionalists-dilemma</ref> As you might imagine, they disagreed often in print.<ref>''Chomsky’s Revolution: An Exchange'' (2002-07-18) https://www.nybooks.com/articles/2002/07/18/chomskys-revolution-an-exchange/</ref> See the wiki page on [[Knowledge]] for more on this dispute. | ||
+ | |||
+ | And finally, the AI research community is recognizing the turn of AI towards Natural Science and is not too happy about that?<ref>Subbarao Kambhampati, ''AI as (an Ersatz) Natural Science.'' '''CACM 65''' No 9 (2022-09) pp 8-9</ref> Kambhampati reports that "Indeed, in a 2003 lecture, Turing laureate Lesle Lamport sounded alarms about the very possibility of the future of computing belong to biology rather than logic, saying it will lead us to living in a world of homeopathy and faith healing<ref>Leslie Lamport, ''The Future of Computing: Logic or Biology'' (2003-07-23) https://bit.ly/3ahrsaI</ref> Perhaps we should go back to one of the authors of the ultimate book on logic, the Principia Mathematica Alfred North Whitehead<ref>Alfred North Whitehead, ''Principia Mathematica'' Cambridge UP (Second edition 1927)</ref> who said "Metaphysical categories are not dogmatic statements of the obvious, they are tentative formulation of the ultimate generalities. If we consider any scheme of philosophic categories as one complex assertion, and apply it to the logician's alternative, true or false, the answer must be that the scheme is false. The same answer must be given to like question respecting the existing formulate principles of any science."<ref>Alfred North Whitehead, ''Process and Reality'' Free Press (1929)</ref> | ||
+ | |||
+ | ===Effective Altruism=== | ||
+ | Under the concept that those with wealth should be careful that the resources that the contribute back to society have a positive impact. In other words, they want to control the way their money is used. An effort to impact AI in this way is underway. <ref>Timnit Gerbu ''Effective Altruism Is Pushing a Dangerous Brand of "AI Safety"'' Wired (2022-11-30) https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/</ref><blockquote>EA is defined by the Center for Effective Altruism as “an intellectual project, using evidence and reason to figure out how to benefit others as much as possible.” And “evidence and reason” have led many EAs to conclude that the most pressing problem in the world is preventing an apocalypse where an artificially generally intelligent being (AGI) created by humans exterminates us. To prevent this apocalypse, Some of the billionaires who have committed significant funds to this goal include Elon Musk, Vitalik Buterin, Ben Delo, Jaan Tallinn, Peter Thiel, Dustin Muskovitz, and Sam Bankman-Fried, who was one of EA’s largest funders until the recent bankruptcy of his FTX cryptocurrency platform. As a result, all of this money has shaped the field of AI and its priorities in ways that harm people in marginalized groups while purporting to work on “beneficial artificial general intelligence” that will bring techno utopia for humanity. This is yet another example of how our technological future is not a linear march toward progress but one that is determined by those who have the money and influence to control it. </blockquote> | ||
+ | |||
+ | Nietzshe had a very dismal view on the efficacy of altruism<ref>Friedrich Nietzsche. ''The Twilight of the Idols'' no 35</ref> <blockquote>An "altruistic" morality, a morality under which selfishness withers, is in all circumstances a bad sign. This is true of individuals and above all of nations. The best are lacking when selfishness begins to be lacking. Instinctively to select that which is harmful to one, to be lured by "disinterested' motives,—these things almost provide the formula for decadence. "Not to have one's own interests at heart" —this is simply a moral fig-leaf concealing a very different fact, a physiological one, to no longer know how to find what is to my interest."... Disintegration of the instincts!—AII is up with man when he becomes altruistic.— instead of saying ingenuously "I am no longer any good," the lie Of morality in the decadent's mouth says: "Nothing is any good,—life is no good."—A judgment of this kind ultimately becomes a great danger; for it is infectious, and it soon flourishes on the soil of society...</blockquote> | ||
+ | |||
+ | A better answer to AI might be to allow those who work directly with underserved people. The best way to measure the success of a society is to see how it treats the least of its members.<ref>Karen Hao ''A new vision of artificial intelligence for the people'' Technology Review (2022-04-22)</ref><blockquote>This story is the fourth and final part of MIT Technology Review’s series on AI colonialism, the idea that artificial intelligence is creating a new colonial world order. The project is a radical departure from the way the AI industry typically operates. Over the last decade, AI researchers have pushed the field to new limits with the dogma “More is more”: Amass more data to produce bigger models (algorithms trained on said data) to produce better results. The approach has led to remarkable breakthroughs—but to costs as well. Companies have relentlessly mined people for their faces, voices, and behaviors to enrich bottom lines. And models built by averaging data from entire populations have sidelined minority and marginalized communities even as they are disproportionately subjected to the technology’s impacts. Over the years, a growing chorus of experts have argued that these impacts are '''repeating the patterns of colonial history.''' Global AI development, they say, is impoverishing communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires.</blockquote> | ||
==Regulation==
The big question is: is it safe? The simple answer is that new technology always comes with new risks. Antitrust law is the best-known method for controlling enterprises too large to be controlled by the government; perhaps it can also be used to control a super-intelligent being of another sort. Perhaps that is all such beings ever are: corporations with computers, and imposed limitations.
* 2024-10-24 [https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/ Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence]
* NIST issued a call for participation in the [https://www.federalregister.gov/documents/2023/11/02/2023-24216/artificial-intelligence-safety-institute-consortium Artificial Intelligence Safety Institute Consortium] (2023-11-02) in the Federal Register.
* President Biden issued a [https://www.whitehouse.gov/ostp/ai-bill-of-rights/ Blueprint for an AI Bill of Rights] (2022-10-04). The problems are well documented: in America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people's opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent. The Blueprint sets out five principles:
# Safe and Effective Systems - You should be protected from unsafe or ineffective systems.
# Algorithmic Discrimination Protections - You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
# Data Privacy - You should be protected from abusive data practices via built-in protections, and you should have [[Agency]] over how data about you is used.
# Notice and Explanation - You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
# Human Alternatives, Consideration, and Fallback - You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
* A law that went into effect in New York City in January 2023<ref>Richard Vanderford, ''New York AI Bias Law Prompts Uncertainty'' Wall Street Journal (2022-09-21)</ref> requires companies to audit their artificial intelligence (AI) hiring systems to assess whether they incorporate racial and gender biases. The law holds hiring companies liable for any biases and could impose fines for violations, but lacks clear guidelines for the AI audit process (a toy version of the kind of metric such an audit examines follows below). While the city's Department of Consumer and Worker Protection has not offered a timeline for when it will publish rules to implement the law, some companies already are taking steps to comply. Said Anthony Habayeb of AI governance software company Monitaur Inc., "Instead of waiting for someone to tell me what to do… I built controls around these applications because I know like with any software, things can and do go wrong."
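As a hedged, toy illustration of the kind of number such an audit examines, the sketch below computes per-group selection rates and impact ratios (each group's rate divided by the highest rate), flagging anything under the classic four-fifths rule of thumb; all candidate counts are invented.

<syntaxhighlight lang="python">
outcomes = {
    # group: (candidates screened, candidates the AI tool selected)
    "group_a": (200, 60),
    "group_b": (180, 36),
    "group_c": (150, 48),
}

rates = {g: sel / total for g, (total, sel) in outcomes.items()}
best = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / best  # impact ratio relative to the most-selected group
    flag = "  <-- review" if ratio < 0.8 else ""  # four-fifths threshold
    print(f"{group}: selected {rate:.0%}, impact ratio {ratio:.2f}{flag}")
</syntaxhighlight>
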
==References==
<references />
+ | |||
+ | ===Other Material=== | ||
+ | * See the wiki page on [[Intelligent Goals]]. | ||
+ | * See the wiki page on [[Machine Learning]]. | ||
+ | * See the wiki page on [[Fairness Accountability Transparency Ethics]] | ||
+ | * See the wiki page on [[Genetic Programming]] | ||
[[Category: Glossary]]
[[Category: Model]]
[[Category: Authentication]]
[[Category: Artificial Intelligence]]
Latest revision as of 15:57, 17 November 2024
Full Title or Meme
Intelligence without human vindictiveness or human compassion.
Context
- In the context of this wiki Intelligence relates primarily to Identity Knowledge, but that has many general aspects.
- In 1950 Turing asked "Can Machines Think".[1]
- People have been working on artificial intelligence starting with a conference at Dartmouth College in the 1950s. There, computer scientists and mathematicians set a very ambitious goal: to develop computers that could understand language, translate it, see patterns, and more.
- Marvin Minsky was teaching undergrad courses in 1964 using the book "Computers and Thought."[2]
- in 2023 Bill Gates in "AI is about to completely change how you use computers" said[3] "I still love software as much today as I did when Paul Allen and I started Microsoft. But—even though it has improved a lot in the decades since then—in many ways, software is still pretty dumb.:
- Some people think that Artificial Intelligence must needs be cold or sterile, but we have plenty of Evidence that is not so.
- Elisa was the first but it best chat gpt so I guess the turning test isn't so useful
- Alexa works on Amazon echo devices and is constantly updated so there is nothing stable to compare.
- GPT was getting some press with GPT-2 included in Technology Review tagged it as a break-through technology in 2021[4], but nothing managed to get public Attention until... {See below for more on Open AI}
- Chat-GPT which was the first publicly accessible product that was not completely cringe-worthy on 2022-11-30.
- Is Artificial Intelligence even artificial? Or is it just yet another way to create a balance between Chaos and Order.
- Determining what Artificial Intelligence actually is and whether is is like human intelligence turns out to be a impossible task. Ray Kurzweil said[5]
The reason a well-defined empirical test is necessary is that ... humans have a powerful tendency to redefine whatever artificial intelligence achieves is not really so hard in hindsight. This is often referred to as the "AI effect". ... Intuitively [the hallucinations and odd answers] feels like a problem. It's tempting to think that Watson "ought" to reason like humans do. But i would propose that this is superstition. In the real world, what matters is how an intelligent being acts.
Origin
Over the summer of 1956 a small but illustrious group gathered at Dartmouth College in New Hampshire; it included Claude Shannon, the begetter of information theory, and Herb Simon, the only person ever to win both the Nobel Memorial Prize in Economic Sciences awarded by the Royal Swedish Academy of Sciences and the Turing Award awarded by the Association for Computing Machinery. They had been called together by a young researcher, John McCarthy, who wanted to discuss “how to make machines use language, form abstractions and concepts” and “solve kinds of problems now reserved for humans”. It was the first academic gathering devoted to what McCarthy dubbed “artificial intelligence”. And it set a template for the field’s next 60-odd years in coming up with no advances on a par with its ambitions.[6]
The first international conference was "Mechanisation of Thought Processes"[7] was held in 1958 with presentations by Minsky and John McCarthy.
Perceptron
Also called a McCulloch-Pitts neuron is an attempt to mimic an animal neuron, originally with a purpose built computer system. While it was first described in 1947, it was not realized until Frank Rosenblatt built is Mark I computer using perceptrons as a primary physical element. He continued to build larger machines with more layer until his early death in 1971. Minskey and Papert wrote an entire book[8] where they purported to show that it was a dead-end idea which basically blocked progress on them til the 1980's. The Large Language Models that became popular in the early 2020's are based on Multi-Level Perceptrons (MLP).
End Game
Now i am become death, the destroyer of worlds. Quoted by Robert Oppenheimer at the completion of the Atomic bomb and in the movie "Ex Machina" from the Bhagavad Gita.
Perhaps a good question to ask is, "Can Intelligence Be Separated from the Body?"[9]Maybe the mind is like a video game controller, moving the body around the world, taking it on joy rides. Or maybe the body manipulates the mind with hunger, sleepiness and anxiety, something like a river steering a canoe. Is the mind like electromagnetic waves, flickering in and out of our light-bulb bodies? Or is the mind a car on the road? A ghost in the machine?Perhaps a better question to ask is, "Can Intelligence Be Separated from a persistent Identifier"?
The Identifiers used with AI in 2023 are basically just service Identifiers. in 2023 at the beginning of the general availability of the deployments of AIs that did not embarrass their creators, was also the time that context was provided to the AI to help it understand a thread of a conversation. Before that every query was de novo, without context. But still there is not a persistent history of conversations, or any concept that that is an "I" there to establish a history, which is still the basis of trust. Perhaps we must wait until we are able to crate an AI that persists an Identity, we, as individuals, will be able to establish trust with it, as an individual.
Problems
- Humans seem to not be able to use Intelligent Design alone to fashion Artificial Intelligence. In fact researchers continue to go back to nature and human functioning for inspiration and even for algorithms. In other words, our understanding of the meaning of the term Artificial Intelligence has been and will continue to be evolving over time.
- Nature itself since the Big Bang seems to be based on Self-organization for its framework.
- Training an Artificial Intelligence with human behaviors will result in unacceptable behavior by the Artificial Intelligence. The most common complaint is real, or imagined, or invented, bias against one group or another.
- Microsoft released Tay,[10] a web robot (bot) to respond to tweets and chats[11] in May 2016. The result was disastrous as Tay learned to be racist from its all-too-human trainers and was shut down in days.
- Google has been plagued with reports and legal action on its search results nearly continuously since it was introduced; in 2019 the president, Donald Trump, accusing it of favoritism to leftist causes.[12] Researcher Safiya U. Noble has written a book[13] mostly complaining that all-too-human programmers injected their own prejudices into their work. What else could he expect of humans, to rise above them selves, whatever that might mean in terms of free speech or freedom of religion?
- The page Right to be Forgotten describes an effort in Europe to teach the search engines to with-hold information about people that they don't like by collecting all of the information that they don't like. Be careful what you ask your Artificial Intelligence to do for you; it might just spill the beans some day, perhaps under court order.
- In the movie "Blade Runner 2049" the protagonist's AI girl friend asks to have her memory wiped so that the police cannot compel her to testify against him. One would hope that our future AIs will be that compassionate towards us; but that kind of compassionate behavior will probably be illegal.
- The development of full Artificial Intelligence could spell the end of the human race. - Stephan Hawking BBC Interview 2014. There are lots of dystopian futures depicted by science fiction authors. Some, like the Terminator series seem unlikely. The scenario in the movie Gattaca where humans were separated into categories based on the genome has already been enabled in the Personhood credential[14] which would allow machines to determine by themselves who is qualified to be classified as human.
- Descartes claimed "I think therefore I am". But it seems that we need to be careful about applying that idea to Artificial Intelligence. Although his philosophy is no longer considered to be valid even for humans.
Vulnerabilities
- 2024-08-20 AI & the Web: Understanding and managing the impact of Machine Learning models on the Web from the W3C
- Department of the Treasury Releases Report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector U.S. Department of the Treasury 2024-03-27
- Security and Privacy Challenges of Large Language Models: A Survey (2024-02)
- Input to the model used by the AI comes from two sources, each of which can be compromised.
- Training data from the Internet can be tampered with to give bad results.
- For starters, the internet data is typically censored to remove input that would cause embarrassment and Conduct Risk for the sponsors.
- Some locales will censor data to make themselves look good or to promote some social point of view of the government or of the industry.
- The input from a single user can cause unexpected changes to later results.
- The early AI releases were compromised with a variety of hate speech that quickly produced really nasty output from the AI.
- Artists seeking to protect their intellectual property from later output have learned to put small changes in their images that cause the images to be mislabeled or worse[15] (see the sketch after this list).
- Giving an AI a goal may result in unexpected and undesired solutions.
- Bruce Schneier described[16] AIs as being "Like crafty genies. They will grant our wishes and then hack them, exploiting our social, political, and economic systems like never before."
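The general idea behind those image perturbations can be shown with a toy example. The following Python sketch uses an invented linear "style classifier" in numpy; it is purely illustrative and is not the artists' actual tool, which crafts perturbations against real vision models, but it demonstrates the same principle: a per-pixel change far smaller than the pixel values themselves is enough to flip the assigned label.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=10_000)   # weights of a toy linear "style classifier"
    x = rng.normal(size=10_000)   # a flattened image

    def label(img):
        return int(w @ img > 0)   # 1 = "recognized as the artist's style"

    # Nudge each pixel against the classifier's score (an FGSM-style step),
    # just far enough to push the score across the decision boundary.
    eps = 1.5 * abs(w @ x) / np.sum(np.abs(w))
    x_poisoned = x - eps * np.sign(w) * np.sign(w @ x)

    print(label(x), label(x_poisoned))      # the label flips...
    print(np.max(np.abs(x_poisoned - x)))   # ...from a tiny per-pixel change

Against a real network the perturbation is found by the same logic, following the model's gradient rather than a known weight vector.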
Uniquely Human
Neither the Turing Test, nor any other known test, is designed to determine if an Artificial Intelligence is really human. The best way to understand what natural intelligence is would be to look at the reasons humans evolved such a large brain. There appear to be these principal requirements:[17]
- The necessity for a large brain came from the changing Ecosystem in which a small remnant of Homo Sapiens living in the south of Africa needed to survive.
- Changing locations to track new sources of protein and the switch from plant to animal protein.
- Empathy, to be able to understand what is in someone else's mind and to deduce whether they notice you and whether they have targeted you as prey.
The results of these were two processes that served to enable survival in such varying Ecosystems:
- Passing learning from one generation to the next.
- An extended adolescence with mothers that ceased being fecund to tend to their existing children and the extended life of grandparents to help with stories and other techniques to bring the knowledge of the tribe to the growing child.
The search for causes more complex than the ability of genes to carry information, such as the migratory patterns of birds, is described in more detail in the wiki page on Chance and Necessity.
- Machine Intelligence is not Artificial - Part 4: Cybernetics and Norbert Wiener
What is of particular note is not simply the existence of cybernetics as a discipline dating back to the 1940s, but the broad popularity that Wiener's book and the topic received, not only from scientists and engineers but also from the general public. His 1948 book went through four printings in the first 6 months and was a best seller, rivaling the contemporary Kinsey Report in popularity over the following decade. This was despite the book, ostensibly for a lay audience, being extremely dense, with many chapters full of mathematical equations. One of the chapters in the original version, "The Computing Machines and the Nervous System," makes it clear that Wiener's mention of humans AND machines in the book's subtitle was pointing towards the comparison of these systems, making this book one of the early popularizers of the idea of "thinking machines" (but by no means the only one). This was two years before Alan Turing's 1950 paper "Computing Machinery and Intelligence," which gave us the Turing test for AI.
- Human-centered AI - THE AI INDEX REPORT - Measuring trends in AI 2024 HAI Stanford
Explaining AI
Most users want to know WHY a decision was made or an action was performed. The article "Solving for Why"[18] focuses on Causality to help researchers overcome shortcomings that have bedeviled more traditional approaches to AI. It is not enough to predict things; a good AI will need to solve things and explain why the choice was made if the decision is to have any support from the human population. People will ask if the decision was fair and will second-guess the choices made by the AI unless they understand and accept the decision. Today AI decisions are fragile; a slight change in the environment may produce drastic changes in outcomes, which may well be inexplicable to humans. It will not be sufficient merely to convince experts in the field about the choices made. It is apparent in 2022 that expert opinions about COVID have not been very popular; even more evidence will be required of an AI decision. This will be particularly true when the decision is costly in any way. Humans like to have choices, which may require the AI to offer choices. That has the benefit of giving humans some feeling that they are in control. But beware of "Hobson's choice", named for a stable keeper in Cambridge who nominally offered students a choice of horses but presented the options so that there was no real choice.
Paradox
Moravec’s Paradox describes our experience that it is hard to teach machines to do things that are easy for most humans (walking, for example) but comparatively easy to teach them things that are challenging for most humans (chess comes to mind).
Polanyi’s Paradox is named for the philosopher Michael Polanyi (1891-1976), who developed the concept of “tacit knowledge”: Central to Michael Polanyi’s thinking was the belief that creative acts (especially acts of discovery) are shot-through or charged with strong personal feelings and commitments (hence the title of his most famous work Personal Knowledge). Arguing against the then dominant position that science was somehow value-free, Michael Polanyi sought to bring into creative tension a concern with reasoned and critical interrogation with other, more ‘tacit’, forms of knowing. Polanyi’s argument was that the informed guesses, hunches and imaginings that are part of exploratory acts are motivated by what he describes as ‘passions’. They might well be aimed at discovering ‘truth’, but they are not necessarily Formal Models that can be stated in propositional or formal terms. As Michael Polanyi (1967: 4) wrote in The Tacit Dimension, we should start from the fact that ‘we can know more than we can tell‘.[19]
- For synonyms to tacit knowledge see the wiki page on Common Sense.
- It seems, in 2023, that generative AI is able to overcome the lack of imagination with simple random guessing where knowledge is lacking.
Fairness
- What is called "Algorithmic Bias" today is just the result of training any Artificial Intelligence on data based on human behavior, which is typically biased.
- We now expect more Fairness from our Artificial Intelligence than we expect from normal humans.
- "The burgeoning field of algorithmic fairness, part of the much broader field of responsible computing, is aiming to remedy the situation"[20] which is the result of using available data.
- Scientists have found that the English word "fairness" has many meanings, and so they would prefer terms, like technical disparities, that they can define precisely. The problem with that is that what society demands is "fairness", which means that scientists can never deliver what society demands.
- As was demonstrated by the COMPAS program for predicting recidivism in convicts, it is not possible to create a single solution that satisfies more than one metric for "fairness"; see the sketch after this list.
- Explainable Fairness means: can you explain it to a judge so that they can make a determination? -- Cathy O'Neil, ORCAA
- Every provider wants you to think that they are meeting the President's EO, like the guys pitching Zero Trust Network Access (ZTNA); do not be confused, this meets no level of Least Privilege or any other attempt to limit access to people that have earned access. It is nothing but the network folk trying to climb on the bandwagon.
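To see why a single solution cannot satisfy more than one fairness metric, here is a minimal Python sketch with invented numbers (not the actual COMPAS data; the group sizes and rates are assumptions chosen for arithmetic clarity). A toy predictor gives both groups the same true-positive and false-positive rates, satisfying equalized odds, yet because the groups' base rates differ, the predicted-positive rates diverge and demographic parity fails.

    import numpy as np

    def rates(y_true, y_pred):
        pos_rate = y_pred.mean()          # what demographic parity compares
        fpr = y_pred[y_true == 0].mean()  # what equalized odds compares
        tpr = y_pred[y_true == 1].mean()
        return pos_rate, fpr, tpr

    # Toy data: 50% of group A reoffend, but only 20% of group B.
    y_a = np.array([1] * 50 + [0] * 50)
    p_a = np.array([1] * 40 + [0] * 10 + [1] * 10 + [0] * 40)
    y_b = np.array([1] * 20 + [0] * 80)
    p_b = np.array([1] * 16 + [0] * 4 + [1] * 16 + [0] * 64)

    print(rates(y_a, p_a))  # (0.5, 0.2, 0.8)  -> same TPR/FPR for both groups,
    print(rates(y_b, p_b))  # (0.32, 0.2, 0.8) -> so equalized odds holds...
    # ...but the predicted-positive rates differ (0.50 vs 0.32), so demographic
    # parity fails; with unequal base rates, one metric or the other must give.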
Disruption
We can now say what GPT can replace: shallow human conversation and the mechanical activity that law or medicine clerks perform. A GPT at a cocktail party could be wittier than anyone else. A GPT could do better than a recent graduate of most law or medical schools at citing existing law or medical practice. Is anyone surprised? AI is moving up the food chain from one mechanical task to the next. We learned from aircraft pilot errors that we had to have a checklist to save pilots from error. I guess we could call the current AI a checklist of stuff that is already known.
The question of what is left for humans might again be of interest. It arose during the industrial revolution, when machines replaced human labor. The best that the pundits of the time could suggest is that humans could have more leisure time. Now machines are moving on to more tasks, but somehow employment is full in 2022. There is likely to be a reckoning some day when machines and their owners disrupt the system and cause unemployment. The rich and the poor will, yet again, come into conflict, and a revolution of some sort will be required.
Bias
- Clearly any Artificial Intelligence is biased by its input data. Social pressures have forced any Entity that offers an AI to make it conform to the laws and regulations of the state. All countries have proscribed some set of topics or responses as unacceptable. In other words, all AIs are censored by the training data or the laws of the land. Even the language used for training can create biases.[21]
In our opinion, the government's position is clearly reflected in the responses of AI systems in the national language, especially when the creation of the AI was funded in the country tested.
- We face the same problems in raising our children: if they are raised in an environment that exhibits bias, it will be hard to change their behavior later in life.[22] But if we use synthetic data to train the AI there will be huge objections, since it is not possible for any training material (for either humans or AI) to avoid being objectionable to some subset of humanity.
Solutions
The wiki page on Trust describes how people can get into the habit of trusting business or technology processes that really do not deserve to be trusted. The following is a listing of some of the technological solutions. The question of whether an Artificial Intelligence is trustworthy is yet to be addressed.
Governance
For most of 2023 the focus was on government standards for AI. The idea that trust can be initiated by government oversight is widespread, but wrong. Trust comes from a history of fair dealing, which is not something that governments understand or practice. But still the government can, and should, punish crimes, so legislation should focus on how crimes committed by AI are punished. The death penalty would be one good deterrent. Here is one discussion of what governments can do: AI/bots and National Security.
Training
A great deal of effort is directed at the education of young children. In large part education is directed at the production of adults that are useful to the society in which they are expected to function as useful citizens. But there is sharp disagreement on the subject. According to Ken Robinson, education should have four core purposes: personal, economic, social, and cultural.[23] Perhaps it is time to model the training of AI on the training of humans?
User Experience
The real question about the use of the current Large Language Model type of Artificial Intelligence is how the device in our possession displays the interaction with the Artificial Intelligence. We humans have been reading stories and watching displays of Artificial Intelligence from the puppet Pinocchio, who wanted to be a real boy, to Ex Machina, a movie about an Artificial Intelligence robot that went rogue. The common thread here is that each was a fictional account that we were able to relate to. Good authors know how to enliven the characters of a story in a way that drags the reader into the story. The User Experience of AI for the near term will be interactions through our personal devices, even though the more interesting, and troubling, case will be when the robotic devices are not in our possession but directly run by some gigantic corporation. The good storytellers will be the ones that grab our Attention, just as is true today.[24]
What we must focus on is the UX and its effect on the people that interact with it.
The Smart Agent
Now that an A.I. can apply reason to problems, it should be expected that such agents are on the horizon. That prediction was made in 2021, and there were glimmerings of AI appearing in the user agents in our hands in 2023, but it is still all driven by Large Language Models (LLM) in the cloud. Intel is predicting that user devices in 2024 will include an AI. Loading LLMs into Edge Computing devices is needed before AI will work in small devices.
Nvidia had been building GPUs for personal computers for a long time before they were repurposed for Artificial Intelligence, but they were not being used that way on personal computers until 2023-12-14, when Intel unveiled "Core Ultra and 5th Gen Xeon – that will change how customers and consumers enable AI solutions in the data center, cloud, network, at the edge, and on the PC", bringing AI to any home computer user with a budget for a new computer.
Neural Nets
Still the most popular, they appear to have reached a limit in what they can explain about human intelligence. Perhaps we need to add compassion and Common Sense back in.
The theory of neural nets has been documented in Quanta.[25] A description of the Turing Award given to the winners for applying neural nets to solving problems (as opposed to explaining intelligence) was written by Neil Savage,[26] who specifically talks about the need for more involvement of the humanities in long-term solutions.
Deep Neural Network Accelerators
Now we come to the logical and nearly final phase of Artificial Intelligence: the design of one AI by another.[27] This is probably just the first step toward the complete control of the AI Ecosystem by the AIs themselves, when the designs become too complex for human understanding. With the proliferation of computationally intensive machine-learning applications, such as chatbots that perform real-time language translation, device manufacturers often incorporate specialized hardware components to rapidly move and process the massive amounts of data these systems demand. Choosing the best design for these components, known as deep neural network accelerators, is challenging because they can have an enormous range of design options. This difficult problem becomes even thornier when a designer seeks to add cryptographic operations to keep data safe from attackers. Now, MIT researchers have developed a search engine that can efficiently identify optimal designs for deep neural network accelerators that preserve data security while boosting performance.
Libratus
Libratus did not use expert domain knowledge, or human data that are specific to [the game domain]. Rather, the AI was able to analyze the game's rules and devise its own strategy. The technology thus could be applied to any number of imperfect-information games. Such hidden information is ubiquitous in real-world strategic interactions, including business negotiation, cybersecurity, finance, strategic pricing and military applications.[28]
Alpha Zero
After Alpha Go, the first version to use Tensor Processing Units (TPU), defeated the best human Go player in 2016,[29] the Google Deep Mind project was generalized into a far simpler version, Alpha Zero,[30] that was just given the rules of the game, whether Go or Chess, and allowed to play against another instance of itself until it had mastered the subject well enough to beat any existing computer model. Garry Kasparov, the former world chess champion, said that Alpha Zero developed a style of play that "reflects the truth" of the game rather than "the priorities and prejudices of programmers." That pretty much ends the discussion about the superiority of "Intelligent Design", which turns out to be just another myth that humans tell about themselves and their anthropomorphic gods. It also ended the testing of Artificial Intelligence against humans in board games. There were few generally newsworthy announcements about Artificial Intelligence for the next 6 years.
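The shape of that self-play loop can be sketched in a few lines. The toy tic-tac-toe learner below is a hypothetical stand-in, not DeepMind's method (which couples Monte Carlo tree search with a deep network), but it follows the same recipe: the program is given nothing but the rules and improves by updating position values from games played against itself.

    import random
    from collections import defaultdict

    WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in WINS:
            if board[a] != 0 and board[a] == board[b] == board[c]:
                return board[a]              # +1 or -1
        return None

    value = defaultdict(float)               # learned value of each position
    visits = defaultdict(int)                # (from player +1's point of view)

    def choose(board, player, explore=0.1):
        moves = [i for i, v in enumerate(board) if v == 0]
        if random.random() < explore:        # keep exploring new lines of play
            return random.choice(moves)
        def score(m):                        # value of the position after move m,
            nxt = board[:]; nxt[m] = player  # seen from the mover's side
            return player * value[tuple(nxt)]
        return max(moves, key=score)

    for _ in range(20_000):                  # self-play: one policy plays both sides
        board, player, history = [0] * 9, 1, []
        while winner(board) is None and 0 in board:
            board[choose(board, player)] = player
            history.append(tuple(board))
            player = -player
        z = winner(board) or 0               # game outcome: +1, -1, or 0 (draw)
        for pos in history:                  # pull every visited position toward z
            visits[pos] += 1
            value[pos] += (z - value[pos]) / visits[pos]

    best_first = max(range(9), key=lambda m: value[tuple([0]*m + [1] + [0]*(8 - m))])
    print("learned favorite opening square:", best_first)   # often the center (4)

No human games, openings, or heuristics appear anywhere; every number the program knows was generated by its own play, which is the point Kasparov was making about a style that "reflects the truth" of the game.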
Open AI
Open AI is a research institute founded by Elon Musk and Sam Altman to make AI results available to everyone. Their finding (which echoed that of Deep Mind at Google) in early 2019 was that this is a very bad idea. Like any technology, it can readily be turned to evil purposes, as reported in Wired.[31] This finding should certainly give pause to any company that seeks to use AI for its own purposes. It is too easy for AI to be turned to the purposes of evil intent. Sam Altman wrote an essay, Moore's Law for Everything,[32] that was reviewed by Ezra Klein.[33]
Not everyone is captivated by Open AI. Quote from the New Yorker: "OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?" To be fair, the version called Copilot by Microsoft does provide links to the original documents.
Ethics
Most of the effort in protecting the human population from the dangers of Artificial Intelligence focuses on arguments about ethics. An ACM webinar on [https://www.youtube.com/results?search_query=Ethical+Quandaries+in+AI-ML Ethical Challenges in AI joined a growing list of such efforts on YouTube]. What remains unclear is how AI ethics can be separated from Censorship. Most of the effort in training an AI involves a list of taboo subjects. The problem here is that not everyone agrees on which ethics should apply. For example, is abortion to be considered murder in one country and just an exercise in free will in another? Similar issues are easy to list, such as free speech or social credit. It comes down to the question of whether ethics in AI will ever be any more than just censorship.
In Healthcare
- WHO Releases AI Guidelines for Health Government Technology 2021-07-12
A new report from the World Health Organization (WHO) offers guidance for the ethical use of artificial intelligence (AI) in the health sector. The six primary principles for the use of AI as set forth in the report are to protect autonomy; promote human well-being, safety, and the public interest; ensure transparency, explainability, and intelligibility; foster Responsibility and accountability; ensure inclusiveness and equity; and promote responsive and sustainable AI. These principles are intended as a foundation for AI stakeholders, including governments, developers, and society. The report, Ethics and Governance of Artificial Intelligence for Health, finds public- and private-sector collaboration necessary to ensure AI’s accountability.
Back to the Future
Brian Christian described "The Alignment Problem"[34] as the difficulty AI has in aligning the results it provides with society's goals. It is worse than you might imagine. The results from AI as of 2020 have not proven to be appropriate. This is, in part, because the systems are trained on data from social systems that are also deemed to be inappropriate. What most researchers are doing from this point is trying to determine how natural intelligence develops and feeding that back into the AI systems.
The first person to try to understand "The Construction of Reality in the Child" was Jean Piaget.[35] The first line of the book is "To understand how the budding intelligence constructs the external world, we must ask whether the child, in its first months of life, conceives and perceives things as we do, as objects that have substance, that are permanent and of constant dimensions." Now that NMR brain scanners can image a thought within an infant's mind,[36] we can actually imagine that the brain really is just the Chinese Room that so perplexed Searle.[37] Searle's objection to Artificial Intelligence was that it was just a symbol-manipulation scheme, and that cannot be all that is involved in understanding. Interestingly, at the same time Noam Chomsky created a generative linguistic model of language that was nothing more than the manipulation of symbols.[38] As you might imagine, they disagreed often in print.[39] See the wiki page on Knowledge for more on this dispute.
And finally, the AI research community is recognizing the turn of AI towards Natural Science and is not too happy about it.[40] Kambhampati reports that, in a 2003 lecture, Turing laureate Leslie Lamport sounded alarms about the very possibility of the future of computing belonging to biology rather than logic, saying it would lead us to living in a world of "homeopathy and faith healing".[41] Perhaps we should go back to one of the authors of the ultimate book on logic, the Principia Mathematica, Alfred North Whitehead,[42] who said "Metaphysical categories are not dogmatic statements of the obvious; they are tentative formulations of the ultimate generalities. If we consider any scheme of philosophic categories as one complex assertion, and apply to it the logician's alternative, true or false, the answer must be that the scheme is false. The same answer must be given to a like question respecting the existing formulated principles of any science."[43]
Effective Altruism
Under the concept that those with wealth should be careful that the resources they contribute back to society have a positive impact (in other words, that they should control the way their money is used), an effort to impact AI in this way is underway.[44] EA is defined by the Center for Effective Altruism as "an intellectual project, using evidence and reason to figure out how to benefit others as much as possible." And "evidence and reason" have led many EAs to conclude that the most pressing problem in the world is preventing an apocalypse in which an artificial general intelligence (AGI) created by humans exterminates us. Some of the billionaires who have committed significant funds to this goal include Elon Musk, Vitalik Buterin, Ben Delo, Jaan Tallinn, Peter Thiel, Dustin Moskovitz, and Sam Bankman-Fried, who was one of EA's largest funders until the recent bankruptcy of his FTX cryptocurrency platform. As a result, all of this money has shaped the field of AI and its priorities in ways that harm people in marginalized groups while purporting to work on "beneficial artificial general intelligence" that will bring techno-utopia for humanity. This is yet another example of how our technological future is not a linear march toward progress but one that is determined by those who have the money and influence to control it. Nietzsche had a very dismal view of the efficacy of altruism:[45]
An "altruistic" morality, a morality under which selfishness withers, is in all circumstances a bad sign. This is true of individuals and above all of nations. The best are lacking when selfishness begins to be lacking. Instinctively to select that which is harmful to one, to be lured by "disinterested" motives,—these things almost provide the formula for decadence. "Not to have one's own interests at heart"—this is simply a moral fig-leaf concealing a very different fact, a physiological one: to no longer know how to find what is to my interest... Disintegration of the instincts!—All is up with man when he becomes altruistic.—Instead of saying ingenuously "I am no longer any good," the lie of morality in the decadent's mouth says: "Nothing is any good,—life is no good."—A judgment of this kind ultimately becomes a great danger; for it is infectious, and it soon flourishes on the soil of society...
A better answer to AI might be to rely on those who work directly with underserved people. The best way to measure the success of a society is to see how it treats the least of its members.[46]
This story is the fourth and final part of MIT Technology Review’s series on AI colonialism, the idea that artificial intelligence is creating a new colonial world order. The project is a radical departure from the way the AI industry typically operates. Over the last decade, AI researchers have pushed the field to new limits with the dogma “More is more”: Amass more data to produce bigger models (algorithms trained on said data) to produce better results. The approach has led to remarkable breakthroughs—but to costs as well. Companies have relentlessly mined people for their faces, voices, and behaviors to enrich bottom lines. And models built by averaging data from entire populations have sidelined minority and marginalized communities even as they are disproportionately subjected to the technology’s impacts. Over the years, a growing chorus of experts have argued that these impacts are repeating the patterns of colonial history. Global AI development, they say, is impoverishing communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires.
Regulation
The big question: is it safe? The simple answer is that new technology always comes with new risks. Antitrust law is the best-known method for controlling enterprises too large to be controlled by the government. Perhaps it can be used to control a super-intelligent being of another sort. Perhaps that is all such beings ever are: corporations with computers and imposed limitations.
- 2024-10-24 Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence
- NIST issued a call for participation in an Artificial Intelligence Safety Institute Consortium in the Federal Register on 2023-11-02.
- President Biden issued a Blueprint for an AI Bill of Rights (2022-10-04). The problems are well documented: in America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased; algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination; and unchecked social media data collection has been used to threaten people's opportunities, undermine their privacy, or pervasively track their activity, often without their knowledge or consent. Five Principles:
- Safe and Effective Systems - You should be protected from unsafe or ineffective systems.
- Algorithmic Discrimination Protections - You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
- Data Privacy - You should be protected from abusive data practices via built-in protections and you should have Agency over how data about you is used.
- Notice and Explanation- You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
- Human Alternatives, Consideration, and Fallback - You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
- A law that went into effect in New York City in January 2023[47] requires companies to audit their artificial intelligence (AI) hiring systems to assess whether they incorporate racial and gender biases. The law holds hiring companies liable for any biases and could impose fines for violations, but lacks clear guidelines for the AI audit process. While the city’s Department of Consumer and Worker Protection has not offered a timeline for when it will publish rules to implement the law, some companies already are taking steps to comply. Said Anthony Habayeb of AI governance software company Monitaur Inc., "Instead of waiting for someone to tell me what to do… I built controls around these applications because I know like with any software, things can and do go wrong."
References
- ↑ A. M. Turing, COMPUTING MACHINERY AND INTELLIGENCE 1950-10-01 https://academic.oup.com/mind/article/LIX/236/433/986238
- ↑ Edward Feigenbaum, & Julian Feldman, joint ed Computers and Thought https://archive.org/details/computersthought00feig
- ↑ Bill Gates AI is about to completely change how you use computers https://www.gatesnotes.com/AI-agents
- ↑ Will Douglas, GPT-3 Technology Review Vol 124 No 2 (2021-03) pp. 34-35
- ↑ Ray Kurzweil, The Singularity is Nearer
- ↑ John McCarthy, M. Minsky, N. Rochester, C.E Shannon, A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence (1955-08) http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf
- ↑ National Physical Laboratory at Teddington, Mechanisation of Thought Processes https://aitopics.org/download/aiclassics:98875BCD
- ↑ M. Minsky & S. Papert, Perceptrons (1987-12-28) ISBN 978-0262631112
- ↑ Oliver Whang, Can Intelligence Be Separated From the Body? New York Times (2023-04-11) https://www.nytimes.com/2023/04/11/science/artificial-intelligence-body-robots.html?
- ↑ Peter Lee, Learning from Tay's introduction. (2016-03-25) https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/
- ↑ Sarah Perez, Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism. (2016-03-24) Tech Crunch https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/
- ↑ Farhad Manjoo, Search Bias, Blind Spots And Google. (2018-08-31) New York Times p. B1
- ↑ Safiya U. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism. (2018-02-20) ISBN 978-1479849949
- ↑ Pymnts, AI’s Human Mimicry Spurs ‘Personhood Credential’ Proposal 2024-09-24 https://www.pymnts.com/digital-identity/2024/ais-human-mimicry-spurs-personhood-credential-proposal/
- ↑ Melissa Heikkilä, This new data-poisoning tool lets artists fight back against generative AI MIT Technology Review 127 No 1 (2024-01) pp. 7-8
- ↑ Bruce Schneier, Hackers Used to Be Humans. Soon, AIs Will Hack Humanity Wired (2021-04-18) https://www.wired.com/story/opinion-hackers-used-to-be-humans-soon-ais-will-hack-humanity/?bxid=5c5b250d24c17c67f8640083
- ↑ Edward O. Wilson, The Meaning of Human Existences Liveright (2014) Chapter 2 ISBN 9780871401007
- ↑ Marina Krakovsky, Solving for Why CACM 65 No 2 (2022-02) pp 11ff
- ↑ Mark K. Smith, "Michael Polanyi and tacit knowledge" at infed
- ↑ Marina Krakovsky, Formalizing Fairness CACM 65 No. 8 (2022-08) p 11ff.
- ↑ Antony Chayka and Andrei Sukhov, Comparing Chatbots Trained in Different Languages CACM 66 No 12 (2023-12) pp. 6-7
- ↑ Will Douglas Heaven, Six Big Questions for Generative AI: Hope, Hype, Nope Technology Review (2024-01) p. 30ff.
- ↑ Ken Robinson, Kate Robinson What Is Education For? (2022-03-02) https://www.edutopia.org/article/what-education
- ↑ Hannah H. Kim, A.I. is a Fiction Wired (2023-08) pp. 9-10
- ↑ Kevin Hartnett, Foundations Built for a General Theory of Neural Networks. (2019-01-31) Quanta https://www.quantamagazine.org/foundations-built-for-a-general-theory-of-neural-networks-20190131/
- ↑ Neil Savage, Neural Net Worth. CACM Vol 62 no 6 p. 12 (2019-06)
- ↑ Adam Zewe, Accelerating AI tasks while preserving data security MIT News (2023-10-30) https://news.mit.edu/2023/accelerating-ai-tasks-while-preserving-data-security-1030
- ↑ Byron Spice, CSD's Sandholm, Brown To Receive Minsky Medal. (2018-11-06) Carnegie-Mellon News https://www.cmu.edu/news/stories/archives/2018/november/minsky-medal.html
- ↑ McMillan, Robert. "Google Isn't Playing Games With New Chip". (2016-05-18) The Wall Street Journal. Archived from the original on 29 June 2016.
- ↑ Steven Strogatz, One Giant Step for a Chess-Playing Machine. (2018-12-26) New York Times https://www.nytimes.com/2018/12/26/science/chess-artificial-intelligence.html
- ↑ Alyssa Foote, The AI Text Generator That's Too Dangerous to Make Public. (2019-02-19) Wired Magazine https://www.wired.com/story/ai-text-generator-too-dangerous-to-make-public/
- ↑ Sam Altman Moore's Law for Everything personal blog (2021) https://moores.samaltman.com/
- ↑ Ezra Klein Sam Altman on the A.I. Revolution, Trillionaires and the Future of Political Power NY Time (2021-06-11) https://www.nytimes.com/2021/06/11/opinion/ezra-klein-podcast-sam-altman.html?action=click&module=Opinion&pgtype=Homepage
- ↑ Brian Christian The Alignment Problem W. W. Norton (2020-10-06) ISBN 978-0-393-86833-3
- ↑ Jean Piaget The Construction of Reality in the Child (original 1954) Ballantine ISBN 345-02338-2-165
- ↑ Rachel Fritts, How do Babies perceive the World? MIT News July/August 2021 p 24 ff
- ↑ James Somers, Head Space The New Yorker Dec. 6, 2021, p 30 ff.
- ↑ George Lakoff, The Functionalist's Dilemma, American Scientist https://www.americanscientist.org/article/the-functionalists-dilemma
- ↑ Chomsky’s Revolution: An Exchange (2002-07-18) https://www.nybooks.com/articles/2002/07/18/chomskys-revolution-an-exchange/
- ↑ Subbarao Kambhampati, AI as (an Ersatz) Natural Science. CACM 65 No 9 (2022-09) pp 8-9
- ↑ Leslie Lamport, The Future of Computing: Logic or Biology (2003-07-23) https://bit.ly/3ahrsaI
- ↑ Alfred North Whitehead & Bertrand Russell, Principia Mathematica Cambridge UP (Second edition 1927)
- ↑ Alfred North Whitehead, Process and Reality Free Press (1929)
- ↑ Timnit Gebru Effective Altruism Is Pushing a Dangerous Brand of "AI Safety" Wired (2022-11-30) https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/
- ↑ Friedrich Nietzsche. The Twilight of the Idols no 35
- ↑ Karen Hao A new vision of artificial intelligence for the people Technology Review (2022-04-22)
- ↑ Richard Vanderford, New York AI Bias Law Prompts Uncertainty Wall Street Journal (2022-09-21)
Other Material
- See the wiki page on Intelligent Goals.
- See the wiki page on Machine Learning.
- See the wiki page on Fairness Accountability Transparency Ethics
- See the wiki page on Genetic Programming