Uncertainty

Full Title or Meme

Context

Karl Pearson, the English statistician and geneticist, is commonly credited with first describing the concept of uncertainty as a measure of data variability in the late 1800s.[1] Before Pearson, scientists realized that their measurements incorporated variability, but they assumed that this variability was simply due to error. For example, measurements of the orbits of planets around the sun, taken by different scientists at different times, varied, and this variability was attributed to errors caused by inadequate instrumentation. As early as 1820, the French mathematician Pierre-Simon Laplace discussed a method for quantifying the distribution of errors in astronomical measurements caused by instrument shortcomings. As technology improved through the 1800s, astronomers realized that they could reduce, but not eliminate, this error in their measurements.

Pearson put forward a revolutionary idea: Uncertainty, he proposed, was not simply due to the limits of technology in measuring certain events – it was inherent in nature. Even the most careful and rigorous scientific investigation (or any type of investigation for that matter) could not yield an exact measurement. Rather, repeating an investigation would yield a scatter of measurements that are distributed around some central value. This scatter would be caused not only by error, but also by natural variability. In other words, measurements themselves, independent of any human or instrument inaccuracies, exhibit scatter.

Whether it is the flight path of an arrow, the resting heart rate of an adult male, or the age of a historical artifact, measurements do not have exact values, but instead always exhibit a range of values, and that range can be quantified as uncertainty. This uncertainty can be expressed as a probability distribution over the possible values, with the probabilities clustered about some central, or mean, value.[2]
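As a minimal numerical sketch of this idea, the scatter in a set of repeated measurements can be summarized by its mean (the central value) and its standard deviation (the quantified uncertainty); the measurement values below are invented for illustration and are not taken from the cited source.

import statistics

# Hypothetical repeated measurements of a single quantity (e.g., a length in cm).
# The values are invented for illustration.
measurements = [9.98, 10.02, 10.01, 9.97, 10.03, 10.00, 9.99, 10.04]

mean = statistics.mean(measurements)     # central value the scatter clusters around
spread = statistics.stdev(measurements)  # sample standard deviation quantifies the scatter

# Report the result in the usual "central value +/- uncertainty" form.
print(f"{mean:.3f} +/- {spread:.3f}")

Repeating the experiment would yield a different list of values, but the mean and standard deviation summarize the distribution of outcomes that Pearson argued is inherent in the measurement itself.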

Fourier Transforms

There is no single formal definition of an uncertainty principle. In general, the term refers to a meta-theorem in Fourier analysis which states that a nonzero function and its Fourier transform cannot both be localized to arbitrary precision.[3]
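One common way to make this statement precise is a Heisenberg-type inequality. The formulation below is a standard one, not quoted verbatim from the cited paper; it uses the Fourier convention shown, and the constant 16π² depends on that convention. For any nonzero f ∈ L²(ℝ),

\[
\left(\int_{\mathbb{R}} x^{2}\,\lvert f(x)\rvert^{2}\,dx\right)
\left(\int_{\mathbb{R}} \xi^{2}\,\lvert \hat f(\xi)\rvert^{2}\,d\xi\right)
\;\ge\; \frac{\lVert f\rVert_{2}^{4}}{16\pi^{2}},
\qquad
\hat f(\xi) = \int_{\mathbb{R}} f(x)\, e^{-2\pi i x \xi}\,dx,
\]

with equality exactly for Gaussian functions. Concentrating \(\lvert f\rvert^{2}\) near a point forces \(\lvert \hat f\rvert^{2}\) to spread out, and vice versa; read with position and momentum as Fourier-conjugate variables, this is the mathematical content of the Heisenberg uncertainty principle.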

For a derivation of the uncertainty principle from Fourier analysis, see Pesaresi.[4]

References

  1. David Salsburg, The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century. New York: W. H. Freeman (2001). ISBN 9780805071344
  2. Anthony Carpi et al., Uncertainty, Error, and Confidence. Visionlearning. https://www.visionlearning.com/en/library/Process-of-Science/49/Uncertainty-Error-and-Confidence/157
  3. Kabir Dubey, The Fourier Uncertainty Principles. University of Chicago. https://math.uchicago.edu/~may/REU2021/REUPapers/Dubey.pdf
  4. Emanuele Pesaresi, Uncertainty Principle Derivation from Fourier Analysis