From MgmtWiki
Revision as of 14:13, 21 July 2022 by Tom (talk | contribs) (Terminology)


Full Title or Meme

Trust, simply put, is just the projection into the future of past behavior by a Subject (person or organization) that has a consistent Identity over time.


Tom Jones 2018-07-25 updated 2019-11-11

Goals and Scope

  • The goal of trust is for an action to be initiated or completed. If two parties do not trust each other they may need a third party to mediate the decision process.
  • The content on this page is intended to describe the tools to evaluate what can be trusted by an individual in a digital age.
  • There is also a section on bootstrapping trust from Attested hardware. More detail is on the Attested page.
  • The primary interaction of interest here is consumer to business (C2B), although a comparison will be made to business to business (B2B) and consumer to government (C2G).


For consistency with related NIST reports, this page uses the following definitions for trust-related terms:

  • Trust: “The confidence one element has in another that the second element will behave as expected.”
  • Trusted: An element that another element relies upon to fulfill critical requirements on its behalf.
  • Trusted boot: A system boot where aspects of the hardware and firmware are measured and compared against known good values to verify their integrity and thus their trustworthiness.
  • Trustworthy: Worthy of being trusted to fulfill whatever critical requirements may be needed.[1]


  • Trust exists in the space between hope and fear. We hope for the best, but fear more that the worst will happen. Unfortunately the overriding challenge is lack of attention. The internet would be paralyzed if every transaction had to be fully examined. But this constant attention seems to be what Web Site developers expect of their customers. You can hear that expectation whenever you hear someone asking for more "education of the consumers." Somehow there needs to be an easier way to create a trusted ecosystem for the internet.
  • Creation of trust in a digital ecosystem is both very hard and very easy. It is very easy in the sense that we have great examples of trustworthy ecosystems in eBay and Amazon.[2] It is very hard in the sense that creating and maintaining such an ecosystem requires a great deal of continuing effort that must be diligently and faithfully nurtured. Andrei Hagiu's theory can explain why platforms seem to steer consumers towards established products and sellers.[3] The major problems are Recovery and Redress, which are further described on those wiki pages. Much of the current tumult about trust on the internet is created by legislators and technologists trying to convince the world that a problem exists that they are uniquely capable of resolving. Caveat Emptor. The following looks at some of the open issues.
  • Cheat me once, shame on you. Cheat me twice, shame on me.
  • Those who do not trust are not trusted.

There are two ways to approach the problem of trust in networked digital systems. Each approach creates its own distinct context.

Computer Engineering

The scientific approach looks for a set of laws that can be formulated and tested to provide the desired trust.

In B2B interactions studies have shown that there is an 86% level of correlation between level of assurance and trust. [4]

When this approach does not work in C2B interactions, computer technical people always seem to start by blaming the users and proposing to educate them. This approach never winds up meeting the goals.

Social Engineering

The social approach to trust is exploited by con men attacking our human weaknesses every day. Creating trustworthy computer systems cannot, by itself, overcome this sort of attack.

Only a good user experience is available to a Web Site to start the process of gaining user trust. The site needs to look attractive and as though it were built by a solid company. The section below on Public Relations explores that aspect of trust. The other is branding, which is simply a means to link the current experience to the company's past behavior. In the long run only branding will work, provided that the Conduct Risk Problem described below does not tarnish the brand's good name.

Individual men may be moral in the sense that they are able to consider interests other than their own in determining problems of conduct, and are capable, on occasion, of preferring the advantages of others to their own . . . But all these achievements are more difficult, if not impossible, for human societies and social groups. In every human group there is less reason to guide and to check impulse, less capacity for self transcendence, less ability to comprehend the need of others and therefore more unrestrained egoism than the individuals, who compose the groups, reveal in their personal relationships.[5] This problem is the source of much of the unpleasantness of the current internet experience.

Related Contexts

  • Federated Trust is a page that describes another way to address trust by the creation of a trusted sub-net where all members have been vetted by a mutually trusted third party.
  • Reliance is “the act of relying, or the condition or quality of being reliant; dependence; confidence; trust; repose of mind upon what is deemed sufficient support or authority.”[6] In this case trust that the actors will perform as expected.
  • Relying Party is a web site that trusts the authority of some identifier providers' assurances or attribute verifiers in making its own trust decision to authorize some access by a user.
  • User Trust of a Web Site is a page that drills into the user experience at a web site that might engender trust.
  • A Trust Service is some Web Site like a Framework Membership Validation service that can report on details of some other Entity or document.


The Psychological Problem

The philosopher Karl Popper defined the psychological problem as one of human effort to discover the truth of a statement.[7] The biggest challenge with that on the internet, according to Professor David Rand, is that most users do not have the analytical skills to determine whether a headline (which is all they typically view) is true or false.[8]

It is not possible to fully test any Framework until it has been fully specified and perhaps even fully implemented. Therefore the selection of elements to be included in any new Framework, or changes to an existing Framework, is dependent on the feelings of the decision maker. That's not to say that we lack stratagems to help make a good guess, but rather that we cannot really know the outcome of the implementation of the Framework in advance.

The Technological Problem

It is easy to set up a Web Site that promises to solve the Trust Problem. Several have done so. The challenge is that there is no particular reason to trust a web site just because it claims to provide trust. In general, if some entity needs to tell you about some attribute on the site, the validity of the assertion should immediately be suspect. Some examples are: (1) the Wells Fargo assertions of trustworthiness immediately after receiving a billion dollar fine,[9] or (2) the release of privacy data by LifeLock after claiming for years to help you protect your User Private Information.[10] Also several new technology initiatives seem to claim to be trust anchors, when all they provide is a means to time-stamp a document. (See the page on Distributed Identity for details.) In general be very wary of any claim made for a technological solution to a Trust problem.

The Philosophical Problem

What does it even mean to Trust that some outcome will occur? Part of the problem is that Trust is context dependent. I am content to trust my money to my banker, but not my brother-in-law. On the other hand I am content to trust my children to my brother-in-law, but not my banker. In his discussion on logic, Karl Popper[11] determines that any logical proof of a statement of fact requires some specialized language where the statements can be made in a manner that does not allow for any ambiguity of meaning. On the internet we see an example of this sort of language in the X.509 Certificate chains that bind the name of a web site to a root certificate of known trustworthiness. There is a set of rules issued by the CA|B forum[12] that specify the conditions under which a web browser can accept a web site as trustworthy. At the end of 2015, when Google discovered that Symantec was not following these rules, they dropped the offending root certificate from the list accepted by the Chrome browser.[13] The strict meaning of the rules was tested, and the rules won in this case. Before any other logically acceptable trust metric is instituted on the internet, a similar strict definition of the context and the language will be needed.

The Ethical Conduct Problem

Ethics and trust are inextricably linked. We are interested in ethics in large part because we are concerned, even obsessed, with the question of whom we can trust in a world where there is risk and uncertainty. In our relationships, we humans are much more concerned about assessing the trustworthiness of others than we are in trying to figure out how ethical they are. So what is trust and what is trustworthiness? Clearly putting our own well-being into the hands of others requires that all the parties to an interaction share the same traditions, which means sharing some sort of framework in which an ethical decision can be made.[14]

Our lives are embedded in human networks, and to assess trust at a given place and time we need to understand the context, or the framework, that applies in that place and time. The Decision to Trust Model (DTM) was developed to help us make better decisions about discerning trustworthiness and even repairing trust.

Trustworthiness relates directly to ethics on two specific dimensions: integrity and benevolence. In brief: ‘‘A trustworthy party is one that will not unfairly exploit the vulnerabilities of the other party in the relationship.’’[15]

Trust relationships exist at many levels: between two people, among members of a team, between teams, within an organization, between workers and management, and even within an entire system, like the financial system or the air traffic control system. The further removed individuals are from the locus of the relationship, the more complicated it becomes to assess trustworthiness. For example, how do you judge the trustworthiness of a bank or a financial system that is holding your money? We would like to use a combination of personal and impersonal cues. For example, if we were making a trust judgment about a doctor for surgery, we might assess not only the doctor but also the doctor's hospital.

Conduct Risk is a new field of auditing that is a response to the huge loss in value that companies like Arthur Andersen, Wells Fargo or Equifax[16] suffered as a result of the manner in which they conducted their business.

The Business Problem

There was a time after WWII when confidence and trust seemed to be at a high point. Business was good, Labor had a say in their working conditions, and most of the population had a rosy view of the future. As explained in a companion page on the Common Good, that all started to fall apart, first with the Korean and Vietnam wars, and then with the age of business take-overs. Now both business groups and identity groups have decided to look out just for their own interests. Given a dog-eat-dog ecosystem, is it any wonder that people have little trust in businesses that push any technology that reduces the need for people as a good for society? There is some indication that companies with too great a concern for the bottom line, like GE and Boeing, now realize that short-term profit is not the way to long-term sustainability. Even the DealBook of the New York Times[17] tells us
There's an old saying, "Things move at the speed of Trust". ... How could you possibly trust something on a platform if you don't even know if it's true? If there is one singular issue that defines the intersection of business and policy at this moment, it is a deepening trust deficit. ... The words "Trust" and "responsibility" were peppered throughout. ... [Brian Chesky] thinks many of us in this [technology] industry over the last 10 years are going from a hands-off model, where the internet is an immune system, to realizing that's not really enough; that we have to take more responsibility for the stuff on our platform.

The Friedman doctrine, also called shareholder theory or stockholder theory, is a normative theory of business ethics advanced by economist Milton Friedman which holds that a firm's sole responsibility is to its shareholders.[18] But on 2019-08-19 the Business Roundtable redefined the purpose of a corporation to promote "an economy that serves all Americans"[19] in an updated Statement that moves away from Shareholder Primacy to include Commitments to All Stakeholders.

Trusting the Machine

The device of most interest to the human user of any computer system is likely to be the smartphone in their pocket. The next section will consider how this device fits into a network Ecosystem. It can be a challenge for the user to separate the actions of the Smartphone and its operating system from those of the apps running on that phone, in part because the phone is shipped with a variety of apps from: (1) the manufacturer of the phone, (2) the operating system on the phone and (3) the telco that sold the phone to the user. While in some cases the function performed by the device is mechanical, such as taking a picture and displaying it back to the user, the phone has become an Artificial Intelligence in its own right. In a 2022 article Sue Halpern[20] describes the process that enables US Military warfighters and commanders to trust that a device in the field will perform as expected by the people that sent the soldier into combat. SoarTech has been working with DoD agencies to create a trust model of an AI in the hands of a warfighter, to understand how the warfighter develops and describes trust in the devices that they employ.

Above and beyond the trust given to the phone by the user, any Relying Party that acquires information from the user needs to be able to trust the device in the user's hands. Several issues arise there, from the interpretation of the user's intent into bits on the wire to the details of how the user Credentials on the phone are protected from compromise. But the most important information to the Relying Party is the identification of the user of the phone. There are levels of Assurance that range from identification of the owner of the phone to the actual presence of the user holding the phone at the time the transaction is made, also known as liveness proofing. Inferring all of this Trust from bits on the wire is the subject of much of this wiki.

Trusting the Ecosystem

Humans are predisposed to trusting the natural ecosystem where they live. They expect the sun to rise every day and are not disappointed, even though the very idea of Induction of truths from facts was disparaged by philosophers starting from David Hume.


  • Trust is earned. Where Ethics are lacking Trust cannot survive. They are reciprocal concepts:[21] high-trust environments will encourage good ethics among participants, and good ethical values in on-line interchanges will build trust.
  • Wittgenstein writes: "Must I not begin to trust somewhere? … somewhere I must begin with non-doubting; and that is not, so to speak, hasty but excusable: it is part of judging."[22] We call that beginning the Framework and further expect that it will also be subject to revision as it is criticized for some failure in the future. The best minds, like Claude Shannon had a method of stripping away the inessentials to get to the core of the problem, and then solve that problem.[23] Trust is that core problem for building a Framework. The following solutions have been tried in the past.

Bootstrapping Trust from the User's Device

How can a User know when Web Site Security is sufficient for them to trust? While that wiki page discusses the issue from the perspective of what the web site can do, this page asks what a user can do to create a source of trust that they can use not only to show that they are trustworthy, but also to test the trustworthiness of the web sites that they visit. This all starts with a trusted User Agent running on a trusted User Device, a computer or Smart Phone that can express its own level of security. This is a problem that has been solved many times over the past decades. In 2012 Bryan Parno published an article titled "Trust Extension for Commodity Computers"[24] that described a technology solution based on the TPM chip that had already been installed in 350 million computers. That article describes some of the challenges of deploying such a secure solution, such as requiring a protected execution environment as could be obtained from a virtual machine environment. While that was prohibitively expensive at the time, current computer systems embed the modern equivalent of the TPM in every Intel or ARM computer chip, and Windows 10 laptop computers now run protected code in a virtual machine environment.[25] So the technology exists to bootstrap trust from the user's personal device.
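The measured-boot idea behind the TPM can be sketched in a few lines. The following is a minimal illustration of the hash-chained "extend" operation that accumulates measurements into a register; it is not the actual TPM command interface, and the stage names are hypothetical.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: the new register value hashes the old value
    together with the new measurement, so the final value depends on
    every stage and on their order."""
    return hashlib.sha256(pcr + measurement).digest()

stages = [b"firmware-v1.2", b"bootloader-v3", b"kernel-5.15"]

# The device measures each boot stage into a single register.
pcr = b"\x00" * 32  # registers start at zero on reset
for stage in stages:
    pcr = extend(pcr, hashlib.sha256(stage).digest())

# A verifier that knows the expected "known good" stages recomputes the chain.
golden = b"\x00" * 32
for stage in stages:
    golden = extend(golden, hashlib.sha256(stage).digest())

print(pcr == golden)  # prints True; any tampered stage would change pcr
```

Because the register can only be extended, never set directly, a compromised later stage cannot erase the evidence that an earlier stage was modified.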

The operational assumption is that the security of any trust statement is determined by the protection and access to the key, which is only accessible in a Trusted Execution Environment. More about trusted hardware will be posted to that page and the page Bootstrapping Identity and Consent for a protocol to use after the user device is known to be trustworthy, and the page Attested for more about hardware trust attestation.

Chain of Trust

For certificates, and most particularly for CCITT X.509 Certificates, the trust of the certificate can chain back to a root of trust certificate. If a Site puts that root of trust certificate into its trust repository, then all of the certificates which chain off of it are also trusted, to the extent listed in the certificate itself. The implication of this is that all self-signed certificates must be explicitly included in the trust repository if they are to be trusted.[26] The assumption behind all chains of trust is that every signing key, including the subject's, is well protected as described above.
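The chaining rule can be illustrated with a toy model. Real path validation also checks signatures, validity periods, name constraints and revocation; the certificate names below are hypothetical.

```python
# Toy model of X.509-style chain validation: each "certificate" names its
# subject and its issuer; trust flows only from roots already in the store.
trust_store = {"ExampleRootCA"}  # hypothetical root placed in the repository

chain = [
    {"subject": "www.example.com", "issuer": "ExampleIntermediate"},
    {"subject": "ExampleIntermediate", "issuer": "ExampleRootCA"},
]

def chains_to_root(chain, trust_store):
    """Check that each certificate was issued by the next one in the list,
    and that the last certificate's issuer is a trusted root.
    (Signature and validity checks are omitted from this sketch.)"""
    for cert, issuer_cert in zip(chain, chain[1:]):
        if cert["issuer"] != issuer_cert["subject"]:
            return False
    return chain[-1]["issuer"] in trust_store

print(chains_to_root(chain, trust_store))  # prints True
```

Note that a self-signed certificate (subject equal to issuer) passes this check only when its name has been explicitly placed in the trust repository, which matches the rule for self-signed certificates stated above.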

Transitive Trust

Since most of us do not have the capability to evaluate even a small fraction of the requests that we receive on a daily basis, we are compelled to trust things that are vouched for by others as trustworthy. As one example, Microsoft created a Trustworthy Computing initiative in 2002.[27] "Fundamental to that decision was the understanding that a company's greatest asset is customer trust." It sounds like they want us to trust them with our computing decisions. Even for the most trusted decision of all, what software to run in kernel mode on our personal computers, they don't seem to be able to perform without error.[28] Still, the only possible decision to be made, if we want to use the modern conveniences of a digital lifestyle, is whether we could make a better decision than companies like Microsoft. It seems clear that most of us will need to find a collection of organizations that we, individually, decide are best capable of looking after our interests, and trust whom they tell us to trust.

Root of Trust

Cryptographic Trust

The above solutions to Trust rely on the creation of a central point of Trust and a protocol to communicate that trust to other participants in the network. In 1982 explorations began into cryptographic solutions to the lack of a central source of trust. The example used was a distributed group of generals who needed to decide on a plan of action while not trusting that the other generals would not renege on the plan for their own benefit.[29] Block Chain solutions with a distributed ledger can provide a distributed source of trust by ensuring that a majority of participants agree on the current state of facts, randomizing which participant gets to assert the current state, as used in Bitcoin to prevent double spending of the coins. Bitcoin used a computationally expensive process which would allow the random selection of the participant with the best solution. All of the other participants could validate that first solution to decide if they agreed that it was a valid source of trust. Now there are proposed solutions that are less expensive than proof-of-work.[30] Any well constructed distributed ledger system can be used to establish trust where no one central authority can be trusted.
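The expensive-to-produce, cheap-to-verify property that Bitcoin's computationally expensive process relies on can be sketched as a simple hash puzzle. This is a toy with a small difficulty, not Bitcoin's actual block format.

```python
import hashlib

def proof_of_work(block_data: bytes, difficulty: int = 12) -> int:
    """Search for a nonce whose SHA-256 hash with the block data starts
    with `difficulty` zero bits. Finding it takes ~2**difficulty tries;
    checking it takes one hash, so anyone can validate the winner."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty: int = 12) -> bool:
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

nonce = proof_of_work(b"current state of the ledger")
print(verify(b"current state of the ledger", nonce))  # prints True
```

The asymmetry between search and verification is what lets every other participant cheaply confirm that the randomly selected winner really paid the computational cost.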

Sovereign Trust

Governments have the advantage of being immune to any suit where they have not agreed to accept responsibility. That solves one of the major issues with liability that other Identifier or Attribute Providers face: crippling law suits. On the other hand, they are also able to write laws that allow them to ignore individuals' privacy rights in the interests of "national security", however they define that to be. Just looking at government issued fiat currency we can see that trust can have a significant impact on the value of the currency, and conversely that the value of the currency can indicate the trust that individual investors have in it. It would be nice if there were some metric of trust in identifier credentials like there is for currency, but that is harder to measure. In some cases, like national health insurance, there is no choice but to use the government issued identifier credentials. Fortunately most national governments prevent the use of those credentials for commercial purposes. The eID cards issued in many countries are overcoming that reluctance, sometimes provoking severe allergic reactions from populaces that have had bad experiences with such credentials.

See the wiki page Self-Sovereign Identity for a discussion on how to give users sovereignty over their Identifiers.

Public Relations

The most common response to any lack of trust is to hire a public relations expert to generate some for you with a warm and fuzzy User Experience. Consider the case of Purdue Pharmaceutical after their behavior promoting opioid dependency became public knowledge.[31] They began an advertising campaign to try to convince the world how much they really care about solving a problem that they created. Or how Facebook manipulates their User Experience to make them seem like a warm and friendly company.[32] The problem is that appearance matters, no matter how much we try to say it doesn't; but once trust is lost, it is a long slow road to build it back.

Zero Trust

If it is so hard to establish trust, why not just try to run society where every transaction begins with zero trust and acquires sufficient trust to complete the transaction?

This is the notion of a Zero Trust Architecture where we drop the very human experience of trusting everyone in our village because everyone knows everyone else.

Digital Trust Networks

If we can no longer rely on proximity for trust, as was possible at the beginning of the network age when perimeters enclosed everyone that had been vetted, we now create a virtual version of that, called a Digital Trust Network. In other words, we have created a virtual perimeter for trust networks to replace a physical perimeter of network connections.

  • You can listen to some "experts" in the field try to make some sense of this here.

Academic Views of Trust

In 2021 a neuro-psychoeconomic (NPE) model of trust was proposed that synthesizes the neural findings into three types of trust:[33]

  1. calculus-based trust (i.e., derived by calculating the costs and benefits of entering or discontinuing a trust relationship)
  2. knowledge-based trust (i.e., grounded in information processing about the other’s predictability over current knowledge)
  3. identification-based trust (i.e., based on the identification of the other’s intentions and acts over repeated interactions and mutual understanding)


  1. NIST Special Publication (SP) 800-160 Volume 2 Revision 1
  2. John Herrman Want to understand all that ails the modern internet? Look to eBay its first megaplatform - and the blueprint for everything that followed. (2018-06-24) The New York Times Magazine p. 14ff
  3. Andrei Hagiu +1, Platforms and the exploration of new products (2018-06-18)
  4. Muneesh Kumar, Trust and Technology in B2B E-Commerce: Practices and Strategies for Assurance Google Books IGI ISBN 978-1613503539
  5. Reinhold Niebuhr, Moral Man and Immoral Society. Scribner’s, (1933)
  6. Merriam Webster 3rd International Dictionary
  7. Karl R. Popper, The Logic of Scientific Discovery English Edition (1959) ISBN 0-415-07892-X
  8. Peter Dizikes, Truth, Lies and Tribal Voters. (2018-11,12) MIT News, part of Technology Review
  9. Donna Borak, Wells Fargo fined $1 billion for insurance and mortgage abuses. (2018-04-20) CNN
  10. Kim Zetter, Lifelock Once Again Failed at Its One Job: Protecting Data (2015-07-21) Wired
  11. Karl Popper, Conjectures and Refutations. (1963) Routledge chapter 9 ISBN 0-415-28594-1
  12. CAB Forum, Baseline Requirements
  13. Larry Loeb, Google No Longer Trusts Symantec’s Root Certificate.
  14. Trust
  15. Banerjee, Bowie and Pavone An Ethical Analysis of the Trust Relationship page 308 in Bachmann and Zaheer eds. Handbook of Trust Research
  16. GAO DATA PROTECTION, Actions Taken by Equifax and Federal Agencies in Response to the 2017 Breach. (2018-08)
  17. Andrew Ross Sorkin, Seeking a Path to Trust. (2019-11-12) New York Times p. F1ff
  18. Jeff Smith, The Shareholders vs. Stakeholders Debate MIT Sloan Management Review (2003-07-15)
  19. Corporate Governance, Business Roundtable Redefines the Purpose of a Corporation to Promote An Economy That Serves All Americans (2019-08-19)
  20. Sue Halpern, Flying Aces New Yorker (2022-01-24) p 18 ff
  21. Linda Fisher Thornton Ethics and Trust are Reciprocal
  22. Ludwig Wittgenstein On Certainty. p. 150 (1972-09-06) ISBN 0061316865
  23. Sat Rana, Claude Shannon: How a Genius Solves Problems.
  24. Bryan Parno, Trust Extension for Commodity Computers CACM 55 No 6 (2012-06) p 76ff
  25. Microsoft. Windows Defender Application Guard overview (2018-07-09)
  26. Microsoft. Certificates [in Windows].
  27. Scott Charney
  28. CEO (unnamed) of Paramount Defenses, Alarming! : Windows Update Automatically Downloaded and Installed an Untrusted Self-Signed Kernel-mode Lenovo Driver on New Surface Device
  29. Lamport, L.; Shostak, R.; Pease, M., The Byzantine Generals Problem. (1982) ACM Transactions on Programming Languages and Systems. 4 (3): 387-389. doi:10.1145/357172.357176
  30. Zubin Koticha, Proof of Stake and the History of Distributed Consensus: Part 1, Nakamoto Consensus, Byzantine Fault Tolerance, Hybrid Consensus, Thunderella. (2018-09-04) Thunder
  31. Barry Meier, Origins of an Epidemic: Purdue Pharma Knew its opioids Were Widely Abused (2018-05-29)New York Times
  32. Evan Selinger, Prof. Philosophy at RIT, Facebook Fabricates Trust Through Fake Intimacy. (2018-06-04)
  33. Yan Wu + 6, Understanding identification-based trust in the light of affiliative bonding: Meta-analytic neuroimaging evidence (2021-12) Pages 627-64

Other Material

  • LIGHTest is a project that is partially funded by the European Commission as an Innovation Action as part of the Horizon2020 program under grant agreement number 700321. LIGHTest‘s objective is to create a Lightweight Infrastructure for Global Heterogeneous Trust management in support of an open Ecosystem of Stakeholders and Trust schemes. We show supported scenarios, motivate the necessity for global trust management and discuss related work. Then we present how LIGHTest addresses the challenges of global trust management, its reference architecture and the pilot applications.
  • Do you Trust me? 2021-05-19