Trust as it is evaluated in actual operation
Thread on certification from OpenID
This thread was captured from the AB/C work group of OpenID
David Waite via Openid-specs-ab. May 13, 2021, 8:55 AM
Adrian raised two good points on the SIOP Atlantic call today, but we unfortunately ran out of time.
First, and most easily discussed: "trust framework" is perhaps not the clearest term for the concept. In this context, it refers to a body that defines the set of technical and non-technical requirements necessary for interoperability within a group, where that group is commonly referred to as a federation.
If another existing term is usable, I’d be all for considering it.
His second point, if I understood correctly, concerns whether a trust framework that attempts to audit/certify participants is compatible with various community goals, such as user choice in wallet software and general self-sovereignty. This is most likely the longer conversation.
We’ve learned from experiences with Web Authentication, Web Payments and the financial-grade API efforts that parties will impose minimum requirements around things like user experience and security before adopting a system. Such federations may require a closed system, where only certified issuers, holders and verifiers are allowed to participate. In the worst case, a party may be blocked from participation by biased governance.
In the healthcare space (in which I’m NOT an expert by any means), the verifier may need to know whether a holder’s informed-consent process meets regulatory requirements before accepting a presented credential.
The goal would be to support both a model where participation is gated by the governance, auditing and certification processes of a federation, and a model where participation is via self-certification. This would be for all roles - issuers, verifiers and holders.
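The two participation models could be sketched as a simple verifier-side policy. This is an illustrative sketch only; the names (`Participant`, `federation_certified`, `accept`) are assumptions for the example, not drawn from any OpenID specification.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    """A party in any role: issuer, verifier, or holder."""
    identifier: str
    federation_certified: bool  # audited/certified by a governance body
    self_certified: bool        # has published its own conformance statement

def accept(p: Participant, require_federation: bool) -> bool:
    """Gate participation according to the verifier's chosen model."""
    if require_federation:
        # Closed-federation model: only audited/certified parties.
        return p.federation_certified
    # Open model: self-certification is also sufficient.
    return p.federation_certified or p.self_certified

wallet = Participant("did:example:wallet-1",
                     federation_certified=False, self_certified=True)
print(accept(wallet, require_federation=True))   # False: blocked in a closed federation
print(accept(wallet, require_federation=False))  # True: admitted under self-certification
```

The point of the sketch is that the gating decision is a local policy choice of the relying party, so both models can coexist over the same credential formats.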
I lean toward more open participation where possible, and the hope would be that the simplicity of self-certification vs the maintenance of auditing/certification processes would be sufficient motivation to create open systems by default.
-DW
Tom Jones <firstname.lastname@example.org> May 13, 2021, 9:02 AM
We have plenty of evidence that a fully open system cannot be known to be secure. So the choice is essentially exclusive: the two models do not overlap.
I have been using the term trust authority for the endpoint where both documentation and trust assertions can be retrieved. While I am not fond of it, it does work.
thx ..Tom (mobile)
David Chadwick via Openid-specs-ab
Speaking as a VC software provider, I obviously prefer self-certification, as it makes it much cheaper for me to provide software to customers. But this path is fraught with difficulties, because some suppliers will automatically cut corners in order to undercut the market.

We experienced this in the 1990s when PKI was just starting. I acted as a consultant to the UK PO, which provided a first-class CA service with warranties and obligations to its customers, including guaranteed payments to RPs if it screwed up on authenticating a user. A PKC from the UK PO provided high assurance and a high level of trust. But the service cost money. Shortly afterwards, Verisign appeared, offering free PKCs. Within a few years the UK PO shut down its CA service as it was not profitable, while Verisign grew and grew and eventually started charging for its PKCs. But the original Verisign PKCs were valueless: I applied for a "Bill Gates" PKC in the late 1990s and got one, and I used it in my security lectures at Kent for many years, until Verisign eventually sold its service to the current owners, who stopped issuing "persona non-validated" PKCs. Several years later the CA/Browser Forum started and produced rules for issuing PKCs (DV, OV and EV ones). Under those rules a PKC became something you could trust, which was the original intention of the X.509 model.
So, to conclude, I don't believe self-certification will work. Operators will hit the market to grab share, offering cheap and shoddy products with all sorts of privacy and security loopholes that customers will not be aware of until they are hit by them. I think trust frameworks (or certification schemes) are going to be essential in order not to tarnish the image of VCs. (I prefer the term VCs to SSI, because SSI is a myth, a dream that can never truly happen. "No man is an island," even though SSI likes to believe that everyone can be one.)
Tom Jones. 9:50 AM
We need to pry apart some terms here. Before we can discuss, let alone choose, a path, we must be clear on the terms we use. I will go back to the levels of assurance that I continue to believe are required, together with a clear idea of the conditions at each level.
0 self-assertion - this is typically sufficient to establish a binding between two parties. The trust comes (as it always does) from a history of good behavior. And, as with other trust metrics, trust takes a long time to build and only one bad action to destroy.
1 self-test - this is what is used in openID and now in did method registration. The governance body creates the test and the developer runs the test and provides evidence of compliance to a semi-automatic evaluation process.
2 one-time audit - this is used by the CA|B forum to review the policies, procedures and actual operations of the entity. It does not test the specific instance at the specific time of operation.
3 continuous audit - this is used by trusted hardware (such as TPM 2 code running in a secure enclave) to assure that current operation continues to meet the certification criteria. This test is re-evaluated at every session initiation or at specific high-value transactions.
Zero trust - means that every session (and possibly every interaction) re-evaluates the trust measures for the subject and the resource requirements.
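The assurance ladder above can be encoded as an ordered enumeration, where a resource states the minimum level it requires and any higher level satisfies it. A minimal sketch; the level names and the `meets` helper are assumptions for illustration, not taken from any standard.

```python
from enum import IntEnum

class Assurance(IntEnum):
    SELF_ASSERTION = 0    # trust built from a history of good behavior
    SELF_TEST = 1         # developer runs the governance body's test suite
    ONE_TIME_AUDIT = 2    # policies/procedures reviewed once (CA|B style)
    CONTINUOUS_AUDIT = 3  # re-evaluated at each session or high-value transaction

def meets(offered: Assurance, required: Assurance) -> bool:
    """A higher assurance level satisfies any lower requirement."""
    return offered >= required

print(meets(Assurance.ONE_TIME_AUDIT, Assurance.SELF_TEST))         # True
print(meets(Assurance.SELF_ASSERTION, Assurance.CONTINUOUS_AUDIT))  # False
```

Using an ordered enum makes the "levels" framing explicit: a zero-trust deployment would simply re-run `meets` at every session initiation instead of caching the result.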
Also note that certification is a necessary condition for trust, but not a sufficient one. There still needs to be a root of trust. In the CA|B case, the root of trust is established by the browser, which can withdraw trust from specific bad actors as their actions become manifest.