OAuth 2.0

Full Title or Meme

The OAuth 2.0 Authorization Framework

Context

In OAuth 2.0, as defined in RFC 6749,[1] a client obtains an access token from an authorization server, with the resource owner's consent, and then presents that token to a resource server to access protected resources on the owner's behalf.
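
A minimal sketch of the central exchange may help fix ideas: the client redeems an authorization code for an access token at the token endpoint. This is illustrative Python only; the endpoint URL, client ID, and all values are hypothetical placeholders, not from any real provider.

  import json
  import urllib.parse
  import urllib.request

  # Hypothetical endpoint and values, for illustration only.
  TOKEN_ENDPOINT = "https://auth.example.com/token"

  data = urllib.parse.urlencode({
      "grant_type": "authorization_code",
      "code": "code-returned-on-the-redirect",   # single-use authorization code
      "redirect_uri": "https://app.example.com/callback",
      "client_id": "example-client-id",
  }).encode("ascii")

  # The token endpoint answers with a JSON document containing
  # access_token, token_type (almost always "bearer"), etc.
  req = urllib.request.Request(TOKEN_ENDPOINT, data=data, method="POST")
  with urllib.request.urlopen(req) as resp:
      token = json.load(resp)
  print(token.get("access_token"), token.get("token_type"))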

Problems

  • OAuth 2.0 still depends on shared secrets between services on Web Sites and other internet devices,[1] while most sites are protected by public keys and certificates, at least until the Quantum Computing Threat arrives.
  • It is still just a collection of parts that can be configured in a wide variety of combinations, most of which are not particularly secure.
  • Token type "bearer" is still the only one used in real-world implementations. See the page Bearer Tokens Considered Harmful on this wiki.
  • The redirect URL is not well specified in the spec and is subject to many exploits. The problem is compounded by poor implementations and the reuse of a single client ID across many deployments.
  • The HTTP Referer header is usually sent in the clear and contains far too much information in Front Channel implementations.
  • Security UX is complicated and not described in the spec.
  • State parameters are needed for security, but are not required by the spec (see the sketch after this list).
  • A collection of follow-on specs added other ways to improve security, such as UMA, PKCE, etc.
  • The Implicit flow was added to enable JavaScript clients, but recent changes in browsers have further weakened its already weak security.
  • The Resource Owner Password Credentials grant turned out to be a really bad idea. It should be banned.
  • Scopes were created, but not explicitly defined, so there is no way to determine what a scope actually means.
  • Discovery is not well defined and always leaks information.
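
The state and PKCE items above can be made concrete. The sketch below (Python; the authorization endpoint, client ID, and redirect URI are hypothetical placeholders) builds an authorization-code request that carries both a random state value and a PKCE code_challenge per RFC 7636:

  import base64
  import hashlib
  import secrets
  from urllib.parse import urlencode

  # Hypothetical values, for illustration only.
  AUTHORIZE_ENDPOINT = "https://auth.example.com/authorize"
  CLIENT_ID = "example-client-id"
  REDIRECT_URI = "https://app.example.com/callback"

  # state: a random, single-use value the client verifies on the
  # callback, binding the response to this request (mitigates CSRF).
  state = secrets.token_urlsafe(32)

  # PKCE: the client keeps a random verifier and sends only its
  # SHA-256 digest through the front channel.
  code_verifier = secrets.token_urlsafe(64)
  code_challenge = base64.urlsafe_b64encode(
      hashlib.sha256(code_verifier.encode("ascii")).digest()
  ).rstrip(b"=").decode("ascii")

  params = {
      "response_type": "code",
      "client_id": CLIENT_ID,
      "redirect_uri": REDIRECT_URI,   # must exactly match a registered URI
      "scope": "openid profile",      # what a scope means is provider-defined
      "state": state,
      "code_challenge": code_challenge,
      "code_challenge_method": "S256",
  }
  print(AUTHORIZE_ENDPOINT + "?" + urlencode(params))

Holding the verifier back until the token request is what defeats an attacker who intercepts the authorization code on the redirect.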

Security Controversy

OAuth 1.0 vs OAuth 2.0 — and the controversy that shook the standards world

Most developers today are familiar with OAuth 2.0 — the framework behind “Login with Google” or API access via bearer tokens. But fewer remember that OAuth 2.0 was born out of a rather dramatic evolution.

OAuth 1.0 was cryptographically secure by design. Every request was signed with HMAC-SHA1 (or RSA), ensuring message integrity and protecting against replay attacks — even over untrusted networks.
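
For a sense of what that signing involved, the fragment below computes an HMAC-SHA1 signature over a signature base string in the style of RFC 5849. It is a simplified sketch: real OAuth 1.0 signing also normalizes and sorts all request parameters, and the secrets and URL here are made-up placeholders.

  import base64
  import hashlib
  import hmac
  from urllib.parse import quote

  # Hypothetical credentials and request, for illustration only.
  consumer_secret = "consumer-secret"
  token_secret = "token-secret"

  # RFC 5849 signs METHOD & encoded-URL & encoded-normalized-params;
  # parameter normalization is elided here for brevity.
  base_string = "&".join([
      "GET",
      quote("https://api.example.com/resource", safe=""),
      quote("oauth_nonce=abc123&oauth_timestamp=1700000000", safe=""),
  ])
  signing_key = quote(consumer_secret, safe="") + "&" + quote(token_secret, safe="")

  signature = base64.b64encode(
      hmac.new(signing_key.encode("ascii"),
               base_string.encode("ascii"),
               hashlib.sha1).digest()
  ).decode("ascii")
  print("oauth_signature =", signature)

Because the signature covers the method, URL, nonce, and timestamp, a captured request cannot simply be replayed or redirected elsewhere.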

Then came OAuth 2.0: simpler, more flexible, better suited for mobile and public clients. But it also abandoned signatures in favor of bearer tokens and shifted all trust to HTTPS.
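
The bearer model is visibly simpler, and that is exactly the trade-off: nothing in the request below is cryptographically bound to the sender, so whoever holds the token can use it, and TLS is the only line of defense. The URL is a placeholder; the token value is the example token from RFC 6750.

  import urllib.request

  # Example token value taken from RFC 6750; the endpoint is hypothetical.
  access_token = "2YotnFZFEjr1zCsicMWpAA"

  req = urllib.request.Request(
      "https://api.example.com/resource",
      # The token itself is the whole credential: no signature,
      # no proof-of-possession. Whoever presents it is "the bearer".
      headers={"Authorization": "Bearer " + access_token},
  )
  with urllib.request.urlopen(req) as resp:
      print(resp.status, resp.read()[:80])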

That shift sparked major controversy. In 2012, Eran Hammer — the editor of the OAuth 2.0 spec — publicly resigned from the working group, calling OAuth 2.0 “a bad protocol” and “a recipe for security disasters.”

His key criticisms:

  • OAuth 2.0 was too open-ended — leading to fragmentation and non-interoperability
  • It offloaded security to implementers, many of whom weren’t equipped for it
  • It prioritized ease of implementation over protocol integrity

In time, the industry adopted OAuth 2.0 regardless, and patched in best practices like PKCE, refresh token rotation, and token revocation. OAuth 2.1 (in draft) now seeks to consolidate these lessons.
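
Token revocation, for instance, arrived as a separate spec (RFC 7009) rather than as part of the core framework. A minimal sketch of a revocation call, assuming a hypothetical endpoint and client ID:

  import urllib.parse
  import urllib.request

  # Hypothetical endpoint and values, for illustration only.
  REVOKE_ENDPOINT = "https://auth.example.com/revoke"

  data = urllib.parse.urlencode({
      "token": "stale-refresh-token-value",
      "token_type_hint": "refresh_token",  # hint field defined by RFC 7009
      "client_id": "example-client-id",
  }).encode("ascii")

  # RFC 7009 revocation is a plain POST; the server returns 200 even
  # for an already-invalid token, to avoid leaking token state.
  req = urllib.request.Request(REVOKE_ENDPOINT, data=data, method="POST")
  with urllib.request.urlopen(req) as resp:
      print(resp.status)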

But the controversy remains a powerful reminder: Simplicity isn’t free. It often comes at the cost of security, clarity, or control.

  • From Tom Jones: Yes, OAuth 2.0 suffered from weak security,
    • But more to the point, it is an authorization for the Relying Party (confusingly called the user's client) to get access to the user's private information.
    • This is the wrong paradigm and is the source of the privacy problems. Let's switch away from leaking users' private information to just getting secure access tokens.
    • This is the basis for OID4VP, which is clearly not aligned with user informed consent.

Solutions

References

  1. RFC 6749, The OAuth 2.0 Authorization Framework
  2. RFC 8252, OAuth 2.0 for Native Apps
  3. Proof Key for Code Exchange wiki page
  4. Proof Key for Code Exchange (PKCE): Micah Silverman (Okta), Implement the OAuth 2.0 Authorization Code with PKCE Flow
  5. Justin Richer, What's Wrong With OAuth 2? https://twitter.com/justin__richer/status/1023738139200778240
  6. Justin Richer, Moving On from OAuth 2: A Proposal. https://medium.com/@justinsecurity/moving-on-from-oauth-2-629a00133ade