OAuth 2.0

From MgmtWiki
Revision as of 16:33, 7 August 2025 by Tom (talk | contribs)


Full Title or Meme

The OAuth 2.0 Authorization Framework

Context

In OAuth 2.0

Problems

  • OAuth 2.0 still depends on shared secrets between services on web sites and other internet devices,[1] while most sites are protected by public keys and certificates, at least until the Quantum Computing Threat arrives.
  • It is still just a collection of parts that can be configured in a wide variety of combinations; most of which are not particularly secure.
  • Token type "bearer" is still the only one used in real-world implementations. See the page Bearer Tokens Considered Harmful on this wiki.
  • The redirect URL is not well specified in the spec and is subject to many exploits. The problem is poor implementations and reuse of a single client ID across many deployments.
  • The HTTP Referer header is usually sent in the clear and contains far too much information in Front Channel implementations.
  • Security UX is complicated and not described in the spec.
  • State parameters are needed for security, but not required by the spec.
  • A collection of other specs implemented additional ways to improve security, such as UMA, PKCE, etc.
  • The Implicit flow was added to enable JavaScript clients, but recent changes in browsers have further weakened its already weak security.
  • The Resource Owner Password Credentials grant turned out to be a really bad idea. It should be banned.
  • Scopes were created but not explicitly defined, so there is no way to determine what a scope actually means.
  • Discovery is not well defined, and it always leaks information.
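Several of the gaps above (the unprotected redirect URL, the optional state parameter, and the late arrival of PKCE) can be partly mitigated on the client side. Here is a minimal sketch in Python of building an authorization request that carries both a CSRF-binding state value and an S256 PKCE challenge; the endpoint, client ID, and redirect URI are hypothetical placeholders, not values from any real deployment.

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode

# Random state value binds the authorization response to this browser
# session, mitigating CSRF attacks on the redirect URL.
state = secrets.token_urlsafe(32)

# PKCE (RFC 7636): the client keeps the verifier secret and sends only
# the S256 challenge, so an intercepted authorization code cannot be
# redeemed by an attacker who lacks the verifier.
code_verifier = secrets.token_urlsafe(64)
code_challenge = base64.urlsafe_b64encode(
    hashlib.sha256(code_verifier.encode("ascii")).digest()
).rstrip(b"=").decode("ascii")

# Hypothetical authorization server, client ID, and redirect URI.
params = {
    "response_type": "code",
    "client_id": "example-client-id",
    "redirect_uri": "https://client.example/callback",
    "scope": "profile",
    "state": state,
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
}
authorize_url = "https://as.example/authorize?" + urlencode(params)
```

On the redirect back, the client must check that the returned state matches the one it sent before redeeming the code, and it must present the stored code_verifier at the token endpoint.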

Security Controversy

OAuth 1.0 vs OAuth 2.0 — and the controversy that shook the standards world

Most developers today are familiar with OAuth 2.0 — the framework behind “Login with Google” or API access via bearer tokens. But fewer remember that OAuth 2.0 was born out of a rather dramatic evolution.

OAuth 1.0 was cryptographically secure by design. Every request was signed with HMAC-SHA1 (or RSA), ensuring message integrity and protecting against replay attacks — even over untrusted networks.

Then came OAuth 2.0: simpler, more flexible, better suited for mobile and public clients. But it also abandoned signatures in favor of bearer tokens and shifted all trust to HTTPS.

That shift sparked major controversy. In 2012, Eran Hammer — the editor of the OAuth 2.0 spec — publicly resigned from the working group, calling OAuth 2.0 “a bad protocol” and “a recipe for security disasters.” His key criticisms:

• OAuth 2.0 was too open-ended — leading to fragmentation and non-interoperability
• It offloaded security to implementers, many of whom weren’t equipped for it
• It prioritized ease of implementation over protocol integrity

In time, the industry adopted OAuth 2.0 regardless — and patched in best practices like PKCE, refresh tokens, and token revocation. OAuth 2.1 (in draft) now seeks to consolidate these lessons.
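The PKCE patch mentioned above is small enough to sketch. Under RFC 7636, the authorization server stores the code_challenge it received with the authorization request and, at the token endpoint, recomputes it from the code_verifier the client presents. The function names below are illustrative, not from any particular library.

```python
import base64
import hashlib

def s256(verifier: str) -> str:
    # RFC 7636 S256 transform: BASE64URL(SHA256(ASCII(code_verifier))),
    # with base64url padding stripped.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def verify_pkce(stored_challenge: str, presented_verifier: str) -> bool:
    # Authorization-server check at the token endpoint: the code is
    # released only if the verifier reproduces the stored challenge.
    return s256(presented_verifier) == stored_challenge
```

The test vector in Appendix B of RFC 7636 can be used to check an implementation of the S256 transform.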

But the controversy remains a powerful reminder: Simplicity isn’t free. It often comes at the cost of security, clarity, or control.

  • From Tom Jones: Yes, OAuth 2.0 suffered from weak security,
    • But more to the point, it is an authorization for the Relying Party (confusingly called the user's client) to get access to the user's private information.
    • This is the wrong paradigm and is the source of the privacy problems. Let's switch away from leaking users' private information to just getting secure access tokens.
    • This is the basis for OID4VP, which is clearly not aligned with user informed consent.

Dick Hardt's Commentary

Eran never liked what became OAuth 2.0. He factored out the bearer token into RFC 6750 because he did not want his name associated with it. That separation made it harder for implementers to understand -- and is something we are undoing in the 2.1 spec.

A design goal of OAuth 1.0 was to not require TLS -- because TLS was too hard. As it turned out, doing the cryptography was hard in OAuth 1.0 which made it hard to deploy.

As Michael Jones calls out -- OAuth is designed for authorization, not login. Facebook Connect used OAuth 2.0 for their APIs, and people started to call the API to get the user ID. That pattern became a way to log in, which led to a number of security issues, as the security requirements for login are different from those for resource access.

With respect to security, OAuth 1.0 had a large hole on release and OAuth 1.0A was released to patch it. Who knows if OAuth 1.0A would have proven itself to be secure -- the adoption was nominal.

Using HTTPS has never to my knowledge been one of the security vulnerabilities in OAuth 2.0.

Allowing public clients, driven by Facebook and one of the other editors, was a mistake that was corrected with PKCE.

Refresh tokens were part of the original spec -- they were not added on later.

Michael Jones: Thanks, Dick. I'd forgotten that a design point for OAuth 1.0 was not requiring TLS. That's hard to fathom now, but at the time, the majority of web sites were using http and not https. The Web has certainly evolved.

Solutions

References

  1. RFC 6749, The OAuth 2.0 Authorization Framework
  2. RFC 8252, OAuth 2.0 for Native Apps
  3. Proof Key for Code Exchange wiki page
  4. Micah Silverman (Okta), Implement the OAuth 2.0 Authorization Code with PKCE Flow
  5. Justin Richer, What's Wrong With OAuth 2? https://twitter.com/justin__richer/status/1023738139200778240
  6. Justin Richer, Moving On from OAuth 2: A Proposal. https://medium.com/@justinsecurity/moving-on-from-oauth-2-629a00133ade