Modeling Ecosystems
Revision as of 19:45, 6 December 2021

Full Title

Modeling of a system that is under constant change creates challenges that are not well understood.

Introduction

Call it induction, correlation, simulation, or modeling, the basic rule is to start with a system at equilibrium or homeostasis so that the math is not too difficult. But real systems are never at equilibrium; equilibria are figments of the physicist's imagination. A system at equilibrium cannot change, and so is of limited interest to anybody. Even when an equilibrium can be achieved in a laboratory, the question remains: what exactly are we comparing reality to? Or, in other words, are we discussing auto-correlation or cross-correlation? Is the system self-similar, or is it modeled on some other physical variable that it might resemble not at all? For example, physicists equate black holes with information theory or gas particles.[1] But philosophers think the physicists might be all wrong about that.[2] Even the philosophers cannot agree among themselves, with the Platonists telling us that the ideal form comes first and reality is just a poor copy of the ideal, and the empiricists telling us that first we build a chair and only then do we create the abstract idea of "chair". But in reality the chair is nothing but an anthropomorphic attempt to minimize energy consumption, so that a human, who is just the creation of random acts of evolution, can turn their attention to some details of a social system that is itself an accidental organization of thousands of years of human attempts to avoid killing each other. Not only is the chair a complete accident, but so is the human an accident of evolution.
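The distinction between auto-correlation and cross-correlation drawn above can be made concrete with a small sketch. The series, the shift, and the correlate helper below are invented for illustration; a minimal example in plain Python:

```python
def correlate(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A series that repeats its own pattern with period 6.
signal = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3, 2, 1]

# Auto-correlation: compare the series with a shifted copy of itself.
auto = correlate(signal[:-6], signal[6:])   # self-similar, so auto == 1.0

# Cross-correlation: compare the series with a different variable entirely.
other = [5, 4, 3, 2, 3, 4]
cross = correlate(signal[:6], other)        # here perfectly anti-correlated: -1.0
```

Auto-correlation asks whether the system resembles itself over time; cross-correlation asks whether it tracks some other variable, which is the same question as whether the model and the reality are really measuring the same thing.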

Dynamic Systems

Without induction, the human species, or any other mammalian species, would not have survived long enough to procreate. Animals need induction to learn where to find food and how to avoid becoming food. The philosophical argument about whether induction can lead to truth pales in comparison to where we would be without it. So too with the discussion about mathematical models of reality. Reality exists without the models. The models do not exist without reality.

All Models are Wrong

The professor of statistics George Box noted[3] that "All models are wrong; but some models are useful." In physics there has been sufficient modeling since Newton to narrow the discrepancy between reality and the model to a very small percentage. Artificial Intelligence (AI) has been an experimental science only since 1960 and so has not produced models of intelligence that are particularly close to the real thing. Brian Christian described "the Alignment Problem"[4] as the difficulty of aligning the goals given to an AI with the results actually obtained. It is worse than you might imagine. When Machine Learning is introduced into society before it has been fully vetted, we must expect that the results will not align with human values. By 2016 the US already had an AI deployed for predicting the future behavior of criminals that was blatantly racist.[5] But since the AI was trained on real-world data, it was just doing the same thing any human would do given the same inputs, since society as a whole was also racist. The point here is that both the racist profiler and the physics standard model are doing the same thing; it is just that one system is more regular than the other.
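How a model trained on biased data faithfully reproduces that bias can be sketched in a few lines. The group labels, rates, and the naive "risk model" below are invented for the illustration and are not drawn from the ProPublica data; a minimal example:

```python
from collections import defaultdict

def train(records):
    """Learn per-group re-arrest base rates from (group, rearrested) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [rearrests, total]
    for group, rearrested in records:
        counts[group][0] += int(rearrested)
        counts[group][1] += 1
    return {g: arrests / total for g, (arrests, total) in counts.items()}

# Hypothetical history in which group B was policed more heavily, so its
# *recorded* re-arrest rate is inflated relative to actual behaviour.
history = ([("A", True)] * 20 + [("A", False)] * 80 +
           [("B", True)] * 40 + [("B", False)] * 60)

model = train(history)
# The model reproduces the bias baked into its training data:
# model["A"] == 0.2, model["B"] == 0.4 -- every member of group B
# is scored twice as "risky", with no reference to individual conduct.
```

The model is statistically faithful to its inputs, which is exactly the problem: it is "right" about a biased record and therefore wrong about the people being scored.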

References

  1. Marco Tavora, Black Hole Entropy and the Laws of Thermodynamics: The Remarkable Similarities Between Black Hole Mechanics and the Laws of Thermodynamics (2020-04-11) Medium https://medium.com/cantors-paradise/black-hole-entropy-and-the-laws-of-thermodynamics-d85fd5d5cce2
  2. Craig Callender, quoted by Brendan Z. Foster, Are We All Wrong About Black Holes? (2019-09-05) Quanta Magazine https://www.quantamagazine.org/craig-callender-are-we-all-wrong-about-black-holes-20190905/
  3. George E. P. Box et al., Statistics for Experimenters
  4. Brian Christian, The Alignment Problem, W. W. Norton (2020-10-06) ISBN 978-0-393-86833-3
  5. Julia Angwin et al., Machine Bias, ProPublica (2016-05-23) https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing