Bearer Tokens Considered Harmful

From MgmtWiki

Banned: This paper was rejected by the IDPro organization, probably because it contains ideas that are not in complete agreement with those of the folks who fund the IDPro organization. It is a response to a newsletter piece by Brian Campbell of Ping Identity telling people why they should use Token Binding.

This paper tells you why you should not use Token Binding.

Introduction

This paper discusses the limitations of Channel Binding and other half-way measures intended to fix a broken security feature introduced with OAuth 2.0:[1] Bearer Tokens.[2] The conclusion is that Bearer Tokens themselves are the problem and we need to be working on finding better ways to authorize the release of resources on the web.

Author: Tom Jones. Date: 2018-10-03, revised 2018-11-11.

History

Given that the internet had its genesis in a DoD grant, it is curious that security has never been part of the design of any of its parts. Security has always been added on afterwards, and we continue to struggle with that poor fit between security and openness today. The internet really only supports machine-to-machine links. This was codified in the Open Systems Interconnection (OSI) model by the telecommunications monopolies as a means to propagate their control of telecommunications. At the time I was working with Richard desJardins from NASA to create a User Guide to OSI (UGOSI), which failed in its effort to make a clear case to the user for why the OSI model was good for them. That was just one harbinger of the failure of the internet to address user issues, which continues to this day. The first security problem, between different enterprises, was addressed by IPSEC, which worked well until one of the machines was in the possession of the user and could be connected to the internet at any point. Shared secrets between different enterprises no longer work for devices that have moved beyond the control of the enterprise.

With the introduction of the user into the security picture, IPSEC (and OSI) was hacked with Channel Binding in RFC 5056 (released 2007-11), which crosses almost all of the OSI levels (from 2 to 7) to give the user control of the secrets used to establish the security channel. This hack has worked well for client computers that are attached to a "home" network, in effect allowing the client computer to be treated as "local" to the enterprise network and inside the enterprise firewall, protected from the hostile internet. Of course the hack was incomplete, in that the user-controlled client computer could also attach to the raw internet, which was the source of external infection vectors. Microsoft introduced a version of channel binding that could also use HTTPS (SSL) connections, Extended Protection for Authentication, in 2009[3] to address Man-in-the-Middle attacks.[4] This created several problems, including the one where the SSL connection was terminated at an edge computer and could not be known at the service computer. That was addressed by another hack, Service Binding, patented by Mark Novak,[5] in which a clear-text client service binding value received from the client at the target server is compared to a server service binding value, and a communication channel is formed between the client and the target server only when the two values match. The overriding assumption is still that the enterprise controls security.

OAuth 1.0 was introduced to provide a simple means to access a user's resources on the web without requiring the user to give up their sign-in credentials. While that worked in theory, the solution was complex and so achieved limited uptake.

Problems

It was into this environment that OAuth 1.0 (using a convoluted version of shared secrets) morphed into OAuth 2.0 (relying instead on public key cryptography in the transport layer), which was still based on one computer talking to another computer. Among the many fields that could be addressed was the HTTP Authorization header, which carries the type of authorization used (e.g. Authorization: Bearer mF_9.B5f-4.1JqM). Unfortunately, only the type "Bearer" is actually supported by any existing implementation. So, in order to expand the functionality of authorization, all modifications to date have been hacks on the bearer token in some way to make it more secure. The latest of these is Token Binding,[6] which is only the first of several related standards listed in the draft RFC, three to five of them depending on how you count. If you have been counting, this is now a hack of a hack of a hack, also known as the great-grand hack. But the real problem with the latest (token binding) hack is that, while the earlier hacks could be implemented at the enterprise level by a single development team, token binding requires that all developers of internet solutions implement the hack with no security vulnerabilities. That is certainly something that has never worked in the past.
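To make the mechanics concrete, here is a minimal sketch (TypeScript, using the standard fetch API; the URL and token value are placeholders, not taken from any spec) of how a bearer token is presented to a resource server. The point is that whoever can produce the string gets the resource; the server has no way to tell the legitimate client from a thief.

  // Hypothetical resource call; any party holding accessToken can make it.
  async function getResource(accessToken: string): Promise<unknown> {
    const response = await fetch("https://resource.example.com/v1/profile", {
      headers: {
        // RFC 6750 header form; "Bearer" is the only widely implemented scheme.
        Authorization: `Bearer ${accessToken}`,
      },
    });
    if (!response.ok) {
      throw new Error(`Resource request failed: ${response.status}`);
    }
    return response.json();
  }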

The assumption that binding the token to the HTTPS channel will in some manner assure that only sites trusted by the user are able to access user information is flawed. The Identifier in the HTTPS channel is simply the DN in an X.509 Certificate. That certificate does chain up to a root authority which is trusted by the browser manufacturer, which offers some level of Assurance, but says little about the real-world Entity behind the Web Site. It is possible for the site to acquire an EV Cert which will provide some Assurance that the site is grounded in some real-world address, but none as to the trust that the user should place in that Entity.
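As a rough illustration of that point, the sketch below (TypeScript on Node.js; the host name is just an example) pulls out the only identifier the HTTPS channel itself offers: the certificate subject and issuer Distinguished Names. Nothing in that data speaks to the real-world entity behind the site.

  import * as tls from "node:tls";

  // Connect and read the peer certificate; all the channel yields is DN data.
  const socket = tls.connect(
    { host: "www.example.com", port: 443, servername: "www.example.com" },
    () => {
      const cert = socket.getPeerCertificate();
      console.log("Subject DN:", cert.subject); // e.g. { CN: 'www.example.com', ... }
      console.log("Issuer DN:", cert.issuer);   // the CA that signed it
      socket.end();
    },
  );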

Now we have a large number of people using OAuth 2.0, but increasing evidence that not only can Facebook not get it right,[7][8] but also that the UK Open Banking community is not convinced that bearer tokens are acceptable for payment protocols. Note that Facebook acknowledged that they "cannot fix this" as early as 2014,[9] but they have again promised to find a solution. Does anyone still believe that it will be possible for them to do that?

To be clear, the bearer token worked as designed; it gave the holder access to a single resource. What all the binding machinery is designed to do is assure that the access is only available to the user that was given the grant. So there are two problems here: (1) the bearer token is not bound to the user and the resource site, and (2) the access granted by the token is to a resource that allows additional privilege grants beyond itself. In the case of Facebook, and many other sites that I have seen, the resource even allowed the user to impersonate other users. In Facebook this was benignly labeled as the "View As" feature. In European open banking proposals a payment initiator can impersonate the user to get money from their account. I don't believe that any of the OpenID or OAuth standards allows Impersonation, but that doesn't stop developers from thinking that they need it and that they are smart enough to control it. Neither is true in practice.

Solution

The obvious solution is a different token type in OAuth 2.0, or perhaps even a different version of OAuth, I guess 3.0. The obvious objection will be that "everybody is using OAuth 2.0 with bearer, we have no choice." The obvious answer is "bollocks, let's do this thing right!" Besides, if so few developers are able to handle the security complexity of OAuth 2.0 as it is, then we would be better off with something new that is securely bound to the user, or perhaps to the user's device. Given the ubiquitous deployment of computers with a trusted execution environment, the latter should be eminently practical. Of course FIDO U2F could provide this functionality as well, so perhaps the predicted wide deployment of Web Authentication will provide an answer. That protocol does require a binding of the web site to the user token (see the sketch below). While that does require a trusted user agent, we know that Android, at least, is committed to validating the source of any app that validates the site binding. Apple seems to take user issues seriously, so there is a good chance they will follow suit.
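As a sketch of that site binding, the snippet below (browser TypeScript; the relying-party id, user details and challenge handling are illustrative assumptions) shows how a Web Authentication credential is created against a relying-party id, which the browser will refuse to exercise for any other origin.

  // Hypothetical registration call; the resulting credential is scoped to rp.id.
  async function registerCredential(challenge: Uint8Array): Promise<Credential | null> {
    return navigator.credentials.create({
      publicKey: {
        challenge, // server-supplied value to prevent replay
        rp: { id: "bank.example", name: "Example Bank" }, // the site the key is bound to
        user: {
          id: new TextEncoder().encode("user-1234"),
          name: "alice",
          displayName: "Alice",
        },
        pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
        authenticatorSelection: { userVerification: "preferred" },
      },
    });
  }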

In summary, OAuth 2.0 has achieved broad adoption across a wide range of deployments. This success is fantastic, and somehow it should be leveraged to lead us to the next level of security. I personally have no confidence in the complex binding protocols now being proposed and strongly recommend a new token design with binding as a part of the token itself, not in some separate process that a developer needs to get right for a deployment to be secure. I would also explicitly ban the use of Impersonation. The opposite view is that the OAuth 2.0 standards are successful precisely because they are flexible and not too hard-line on security. I suspect it is obvious that I tend to be hard-line on security, which is what applications like banking require.
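To illustrate what "binding as part of the token itself" could look like, here is a minimal sketch (TypeScript; the claim names follow the spirit of the proof-of-possession "cnf" claim of RFC 7800, and the verification inputs are assumptions rather than any existing library's API). A token shaped like this is useless to a thief who cannot also prove possession of the bound key.

  // Hypothetical bound-token shape: the token names the key its presenter must prove.
  interface BoundToken {
    iss: string;          // who issued the token
    aud: string;          // which resource server it is meant for
    exp: number;          // expiry, seconds since the epoch
    cnf: { jkt: string }; // thumbprint of the presenter's public key
  }

  function acceptPresentation(
    token: BoundToken,
    presenterKeyThumbprint: string,
    proofSignatureValid: boolean, // result of verifying a signature made with that key
  ): boolean {
    const notExpired = token.exp > Date.now() / 1000;
    const keyMatches = token.cnf.jkt === presenterKeyThumbprint;
    // Possession of the token string alone is no longer sufficient.
    return notExpired && keyMatches && proofSignatureValid;
  }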

What's in a Name

While the Bearer Token does not define any internal structure, RFC 6750 does state that "Any party in possession of a bearer token (a "bearer") can use it to get access to the associated resources (without demonstrating possession of a cryptographic key). To prevent misuse, bearer tokens need to be protected from disclosure in storage and in transport." It is precisely this statement that causes most security experts to disavow Bearer Tokens completely. As far as I know, this statement is not disavowed in any profile that depends on RFC 6750.

Several profiles have been defined that add specific syntax and semantics for the use of the token in a particular context. For example, the HEART profile for OAuth 2.0 provides for mandatory fields specifying the source and destination of the token, yet it is still a Bearer Token by name and, more to the point, by reputation. In the absence of any revision to OAuth 2.0, it seems prudent to stop use of the term 'bearer' and the HTML value 'bearer'. Note that the HTML use of 'bearer' only applies to front-channel redirects, but that does not remove the taint that 'bearer' carries with it by virtue of the preamble quoted above.

Shakespeare insisted that "a rose by any other name would smell as sweet." That sentiment does not seem to apply to systems and security architects, who read the specs and make their decisions based on the words in the spec.

References

  1. D. Hardt, The OAuth 2.0 Authorization Framework. RFC 6749
  2. M. Jones, D. Hardt, The OAuth 2.0 Authorization Framework: Bearer Token Usage. RFC 6750
  3. Microsoft SWI, Extended Protection for Authentication. (2009-12-08) https://blogs.technet.microsoft.com/srd/2009/12/08/extended-protection-for-authentication/
  4. Microsoft, Man in the Middle. https://msdn.microsoft.com/en-us/library/cc247407.aspx
  5. Mark Novak et al., Service Binding. US Patent 8,850,553 (2014-09-30)
  6. A. Popov et al., Token Binding over HTTP (approved but not yet released RFC). https://datatracker.ietf.org/doc/draft-ietf-tokbind-https/
  7. Thomas Brewster, How Facebook Was Hacked And Why It's A Disaster For Internet Security. Forbes (2018-09-28). https://www.forbes.com/sites/thomasbrewster/2018/09/29/how-facebook-was-hacked-and-why-its-a-disaster-for-internet-security/#5a64b0b82033
  8. Issie Lapowsky, The Facebook Hack Exposes an Internet-Wide Failure. Wired (2018-10-02). https://www.wired.com/story/facebook-hack-single-sign-on-data-exposed/?CNDID=45183233&mbid=nl_100218_daily_list1_p4
  9. Wang Wei, Hacking Facebook User 'Access Token' with Man-in-the-Middle Attack. The Hacker News (2014-03-11). https://thehackernews.com/2014/03/hacking-facebook-user-access-token-with.html