Risk Analysis

Full Title or Meme

Risk Assessment can take many forms depending on the principal that bears the risk of loss and the metric that is to be minimized.

Context

While this topic focuses only on the risks of Identity Management, there are still many principals that can suffer a loss in any digital interchange:

  1. Users
  2. Identifier and Attribute Providers
  3. Relying Parties
  4. Trusted Third Parties

There are several objectives:

  1. Liability to tort actions
  2. Fines from Government bodies
  3. Loss of reputation
  4. Loss of Privacy
  5. The appearance of fairness in the court of public opinion.

Problems

The goal of a Risk Management system will have major implications for the type of Risk Assessment performed.

For example, it has been shown that it is not possible to make predictions about the likelihood of future events without a series of assumptions, and those assumptions have been shown to be unlikely to all be true at the same time (the sketch after the list below makes the tension concrete):

  1. The base rate statistics need to be the same for the different sub-populations of the sample.
  2. There is no algorithm that can classify a mixed population and treat each sub-population (say by race or gender) the same way when those base rates differ.
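
The conflict between these two assumptions can be illustrated with a small sketch. The numbers below are hypothetical and are not taken from any cited study; the sketch relies on the standard identity FPR = TPR × (p / (1 − p)) × ((1 − PPV) / PPV), which shows that a classifier with the same precision (PPV) and the same true positive rate (TPR) in two sub-populations must have different false positive rates whenever their base rates p differ.

  # Sketch only: equal precision and recall across two sub-populations forces
  # unequal false positive rates when the base rates differ.
  def false_positive_rate(base_rate, ppv, tpr):
      # Identity: FPR = TPR * (p / (1 - p)) * ((1 - PPV) / PPV)
      return tpr * (base_rate / (1.0 - base_rate)) * ((1.0 - ppv) / ppv)

  ppv, tpr = 0.7, 0.6   # same hypothetical precision and recall for both groups
  for group, base_rate in (("A", 0.5), ("B", 0.3)):
      fpr = false_positive_rate(base_rate, ppv, tpr)
      print(f"group {group}: base rate {base_rate:.0%} -> false positive rate {fpr:.1%}")
  # group A: base rate 50% -> false positive rate 25.7%
  # group B: base rate 30% -> false positive rate 11.0%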

Mark Twain had it right when he declared that there are liars, there are damned liars, and there are statisticians. It isn't so much that the statisticians are lying as that they are not capable of explaining the mathematics to the people who read the statistics.

ProPublica Article

A good example of the problem with statistics was highlighted in the ProPublica article.[1] Sentencing guidelines were created to help judges hand down unbiased sentences based on a criminal's likelihood of reoffending if released. The company that created the software tested the results using statistics that did not consider the offender's race, and so claimed that it was unbiased. The problem was that when race was considered, the software was shown to be wildly wrong about the recidivism of blacks versus whites. The question was then asked how these results could be reconciled. The answer was that they cannot. If statistics are used in a place where they have not been validated, nothing can be said about their validity. As a result, blacks were unfairly sentenced to longer terms than were whites.
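
A minimal sketch of the kind of per-group check that surfaces this follows. The confusion counts are made up for illustration and are not ProPublica's data; the point is that a race-blind validation (overall accuracy) can look identical for both groups while the false positive and false negative rates diverge sharply.

  # Hypothetical confusion counts per group (NOT ProPublica's data).
  # Each entry is (true_pos, false_pos, true_neg, false_neg).
  counts = {
      "black defendants": (350, 200, 300, 150),
      "white defendants": (90, 140, 560, 210),
  }
  for group, (tp, fp, tn, fn) in counts.items():
      accuracy = (tp + tn) / (tp + fp + tn + fn)
      fpr = fp / (fp + tn)   # labelled high risk but did not reoffend
      fnr = fn / (fn + tp)   # labelled low risk but did reoffend
      print(f"{group}: accuracy {accuracy:.0%}, FPR {fpr:.0%}, FNR {fnr:.0%}")
  # black defendants: accuracy 65%, FPR 40%, FNR 30%
  # white defendants: accuracy 65%, FPR 20%, FNR 70%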

References

  1. Julia Angwin et al., "Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks." ProPublica (2016-05-23) https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing