Verifier Management
Revision as of 11:24, 9 May 2024

Full Title or Meme

Whenever the user of a computing device connects to an Entity by navigating to its web site, the two parties engage in a negotiation to determine the basis for a continuing relationship between them.

FedCM

  • Neil Madden -> Thanks for these slides and recording. This is a fascinating proposal. I have plenty of potential thoughts and comments to digest, but I guess the most fundamental is that this spec assumes that users and IdPs will be happy for their browser to be a trusted party involved in login flows.
  • Sam Goto -> Yep, that is, indeed, the privacy and security threat model that we (FedCM specifically, Web Platform APIs in general) use: the user agent is a trusted party.
  • Neil -> In particular, the call to the accounts endpoint assumes that the IdP is willing to provide PII about the user to the browser. That seems questionable.
  • Sam Goto -> Aside from a privacy/security threat model perspective (meaning, the user agent already has visibility over every network request made available to the content area), I think that, if you look through the lenses of the design of incentives, this is indeed something that we are still gathering validation. So far, it seems to strike a good balance, but I think you are right in that this introduces an extra game theoretical position that can be questioned.
    • Neil -> This endpoint also has no CSRF protection, so risks leaking PII more generally (eg to any origin that has been CORS-allowlisted).
    • Sam Goto -> As far as CSRF goes, we expose a Sec-Fetch-Dest HTTP request header, which is a forbidden request header (meaning that it can't be polyfilled in userland).

https://fedidcg.github.io/FedCM/#sec-fetch-dest-header
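Sam's Sec-Fetch-Dest mitigation amounts to a one-line server-side check on the IdP. A minimal sketch (the handler shape and response values here are illustrative; only the Sec-Fetch-Dest header and its webid value come from the FedCM spec):

```python
def is_fedcm_request(headers: dict) -> bool:
    # The browser attaches Sec-Fetch-Dest: webid to FedCM-initiated
    # fetches; it is a forbidden request header, so page JavaScript
    # cannot forge it.
    return headers.get("Sec-Fetch-Dest") == "webid"

def accounts_endpoint(headers: dict) -> tuple:
    # Refuse to serve account PII unless the request provably came from
    # the browser's FedCM machinery rather than an arbitrary fetch().
    if not is_fedcm_request(headers):
        return 403, {}
    return 200, {"accounts": []}  # account list elided in this sketch
```

This is also the check Neil worries IdPs will forget: nothing fails safe if the comparison is omitted.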


  • Neil -> As another general comment, I'd say that if you want this to be easy for RPs to apply to existing login flows then it needs to be something that is easy to configure/initiate via a reverse proxy. That would suggest HTTP header-based rather than a JS API in my opinion.
  • Sam Goto -> Yep, that sounds reasonable to me. For the most part, we think of JS APIs and HTTP requests as largely isomorphic in the important parts (again, privacy/security wise), and we can expose either/both purely based on ergonomics (as you suggest), so yeah, if this makes it easier for developers, it is easy to make it happen, I think.


On 8 May 2024, at 17:52, Sam Goto <goto@google.com> wrote:

On Wed, May 8, 2024 at 7:23 AM Neil Madden <neil.e.madden@gmail.com> wrote: Thanks for these slides and recording. This is a fascinating proposal. I have plenty of potential thoughts and comments to digest, but I guess the most fundamental is that this spec assumes that users and IdPs will be happy for their browser to be a trusted party involved in login flows.

Yep, that is, indeed, the privacy and security threat model that we (FedCM specifically, Web Platform APIs in general) use: the user agent is a trusted party.

I’m sure browser developers do of course view their own products as trustworthy, but not everyone does. Episodes like [1] do provoke some distrust. Especially in corporate environments where users are forced to use a particular user-agent (and may be subject to mitm proxies), this may not be a universally accepted threat model.

[1]: https://www.theverge.com/2023/4/25/23697532/microsoft-edge-browser-url-leak-bing-privacy


In particular, the call to the accounts endpoint assumes that the IdP is willing to provide PII about the user to the browser. That seems questionable.

Aside from a privacy/security threat model perspective (meaning, the user agent already has visibility over every network request made available to the content area)

Sure, but if I use the recommended auth code flow or encrypted ID tokens, then PII is not exposed to the browser. And it’s not just the browser itself in the current proposal, as the token is exposed to javascript, of course, so the usual XSS risks.

, I think that, if you look through the lenses of the design of incentives, this is indeed something that we are still gathering validation. So far, it seems to strike a good balance, but I think you are right in that this introduces an extra game theoretical position that can be questioned.

I guess a related question is whether browser vendors are intending for this to become the only game in town for cross-site authentication? If not then those with differing threat models can use other mechanisms. But if the plan is to eventually completely block all other federation protocols then it needs to work for all use cases.


This endpoint also has no CSRF protection, so risks leaking PII more generally (eg to any origin that has been CORS-allowlisted).

As far as CSRF goes, we expose a Sec-Fetch-Dest HTTP request, which is a forbidden request header (meaning that it can't be polyfilled in userland).

https://fedidcg.github.io/FedCM/#sec-fetch-dest-header

Ok, that is good. But it feels like something that IdPs could easily forget to enforce. In general, being one missed security header check away from a PII data leak seems not a fun place to be for an IdP.

As another general comment, I'd say that if you want this to be easy for RPs to apply to existing login flows then it needs to be something that is easy to configure/initiate via a reverse proxy. That would suggest HTTP header-based rather than a JS API in my opinion.

Yep, that sounds reasonable to me. For the most part, we think of JS APIs and HTTP requests as largely isomorphic in the important parts (again, privacy/security wise), and we can expose either/both purely based on ergonomics (as you suggest), so yeah, if this makes it easier for developers, it is easy to make it happen, I think.


— Neil


Sam Goto <goto=40google.com@dmarc.ietf.org> Wed, May 8, 11:04 AM (22 hours ago) to Neil, OAuth


On Wed, May 8, 2024 at 10:45 AM Neil Madden <neil.e.madden@gmail.com> wrote:

On 8 May 2024, at 17:52, Sam Goto <goto@google.com> wrote:

On Wed, May 8, 2024 at 7:23 AM Neil Madden <neil.e.madden@gmail.com> wrote: Thanks for these slides and recording. This is a fascinating proposal. I have plenty of potential thoughts and comments to digest, but I guess the most fundamental is that this spec assumes that users and IdPs will be happy for their browser to be a trusted party involved in login flows.

Yep, that is, indeed, the privacy and security threat model that we (FedCM specifically, Web Platform APIs in general) use: the user agent is a trusted party.

I’m sure browser developers do of course view their own products as trustworthy, but not everyone does.

The architecture of the web is constructed in such a way that a user can (and, in fact, does) change user agents if they stop representing them. The same (in terms of the economics and the privacy/security threat model) goes for your operating system and your hardware. From a security threat model perspective, the web also largely assumes that the user agent (including the OS and the hardware) is trusted by the user.

Episodes like [1] do provoke some distrust. Especially in corporate environments where users are forced to use a particular user-agent (and may be subject to mitm proxies), this may not be a universally accepted threat model.

[1]: https://www.theverge.com/2023/4/25/23697532/microsoft-edge-browser-url-leak-bing-privacy


In particular, the call to the accounts endpoint assumes that the IdP is willing to provide PII about the user to the browser. That seems questionable.

Aside from a privacy/security threat model perspective (meaning, the user agent already has visibility over every network request made available to the content area)

Sure, but if I use the recommended auth code flow or encrypted ID tokens, then PII is not exposed to the browser.

From a privacy/security threat model perspective, again, if PII is rendered in the DOM, it is exposed to the browser rendering it. When an IdP renders a page with the user's personal information, that is exposed to the browser (in the same way that an HTTP request would be).

And it’s not just the browser itself in the current proposal, as the token is exposed to javascript, of course, so the usual XSS risks.

Yeah, XSS is a risk (extensions, particularly, come to mind), but not one that is bigger than the status quo (e.g. extension can intercept top level redirects too).

In fact, with a high-level API (such as FedCM), you can constrain the scope of the memory footprint in ways that low-level APIs (e.g. top level redirects, iframes and pop-up windows) can't, so if anything I expect FedCM to provide a much higher security bar than the alternatives.


, I think that, if you look through the lenses of the design of incentives, this is indeed something that we are still gathering validation. So far, it seems to strike a good balance, but I think you are right in that this introduces an extra game theoretical position that can be questioned.

I guess a related question is whether browser vendors are intending for this to become the only game in town for cross-site authentication?

It is not clear; it is probably too soon to tell either way. Tracking on the web has a lot of moving parts, and not all of them have settled.

If not then those with differing threat models can use other mechanisms. But if the plan is to eventually completely block all other federation protocols then it needs to work for all use cases.


This endpoint also has no CSRF protection, so risks leaking PII more generally (eg to any origin that has been CORS-allowlisted).

As far as CSRF goes, we expose a Sec-Fetch-Dest HTTP request, which is a forbidden request header (meaning that it can't be polyfilled in userland).

https://fedidcg.github.io/FedCM/#sec-fetch-dest-header

Ok, that is good. But it feels like something that IdPs could easily forget to enforce. In general, being one missed security header check away from a PII data leak seems not a fun place to be for an IdP.

Yeah, agreed. I'd love to hear about other ways that we could make this endpoint more secure.


Sam Goto <goto=40google.com@dmarc.ietf.org> Wed, May 8, 11:48 AM (21 hours ago) to Neil, OAuth


On Wed, May 8, 2024 at 11:03 AM Sam Goto <goto@google.com> wrote:


On Wed, May 8, 2024 at 10:45 AM Neil Madden <neil.e.madden@gmail.com> wrote:

On 8 May 2024, at 17:52, Sam Goto <goto@google.com> wrote:

On Wed, May 8, 2024 at 7:23 AM Neil Madden <neil.e.madden@gmail.com> wrote: Thanks for these slides and recording. This is a fascinating proposal. I have plenty of potential thoughts and comments to digest, but I guess the most fundamental is that this spec assumes that users and IdPs will be happy for their browser to be a trusted party involved in login flows.

Yep, that is, indeed, the privacy and security threat model that we (FedCM specifically, Web Platform APIs in general) use: the user agent is a trusted party.

I’m sure browser developers do of course view their own products as trustworthy, but not everyone does.

The architecture of the web is constructed in such a way that a user can (and, in fact, does) change user agents if they stop representing them. The same (in terms of the economics and the privacy/security threat model) goes for your operating system and your hardware. From a security threat model perspective, the web also largely assumes that the user agent (including the OS and the hardware) is trusted by the user.

Episodes like [1] do provoke some distrust. Especially in corporate environments where users are forced to use a particular user-agent (and may be subject to mitm proxies), this may not be a universally accepted threat model.

Enterprise admins are considered privileged, so they are also outside of the browser's ability to guard against, as are physically local attacks (including public/shared computers). Some of these assumptions derive from maxims like "A computer is only as secure as the administrator is trustworthy."

I hear you that these [1] are problematic, but I just wanted to be transparent about how browser engineers generally think about these threats and which security threat models are used.


Neil Madden neil.e.madden@gmail.com via ietf.org Wed, May 8, 1:25 PM (20 hours ago) to Rifaat, oauth

Looking at these slides again, and at the spec, does this even work to defeat tracking? The browser makes two requests to the IdP prior to getting consent from the user:

1. To look up the accounts of the user (identifying the user)
2. To look up the metadata of the client (identifying the RP)

Isn’t it rather trivial for a tracker posing as an IdP to correlate these two requests? The privacy considerations talk about IP addresses and timing as ways to correlate, but there are plenty of others.

— Neil

On 8 May 2024, at 13:34, Rifaat Shekh-Yusef <rifaat.s.ietf@gmail.com> wrote:

 <IETF-OAuthInterim24-FedCM.pdf> _______________________________________________ OAuth mailing list -- oauth@ietf.org To unsubscribe send an email to oauth-leave@ietf.org


Joseph Heenan via ietf.org Wed, May 8, 1:34 PM (20 hours ago) to Neil, <oauth@ietf.org>

Hi Neil


On 8 May 2024, at 18:45, Neil Madden <neil.e.madden@gmail.com> wrote:


On 8 May 2024, at 17:52, Sam Goto <goto@google.com> wrote:

On Wed, May 8, 2024 at 7:23 AM Neil Madden <neil.e.madden@gmail.com> wrote:


In particular, the call to the accounts endpoint assumes that the IdP is willing to provide PII about the user to the browser. That seems questionable.

Aside from a privacy/security threat model perspective (meaning, the user agent already has visibility over every network request made available to the content area)

Sure, but if I use the recommended auth code flow or encrypted ID tokens, then PII is not exposed to the browser. And it’s not just the browser itself in the current proposal, as the token is exposed to javascript, of course, so the usual XSS risks.

Sam’s response here is fair, but also note that as far as I understand it you can still use the authorization code flow or encrypted id tokens with the FedCM API - this is likely one of the things we need to talk about as we discuss how OAuth2 & OpenID Connect are profiled to work with the new API.

Joseph



Sam Goto <goto=40google.com@dmarc.ietf.org> Wed, May 8, 1:42 PM (19 hours ago) to Neil, oauth


On Wed, May 8, 2024 at 1:26 PM Neil Madden <neil.e.madden@gmail.com> wrote: Looking at these slides again, and at the spec, does this even work to defeat tracking? The browser makes two requests to the IdP prior to getting consent from the user:

1. To look up the accounts of the user (identifying the user)
2. To look up the metadata of the client (identifying the RP)

Isn’t it rather trivial for a tracker posing as an IdP to correlate these two requests? The privacy considerations talk about IP addresses and timing as ways to correlate

The timing attack is the one that we think we are most vulnerable to at this layer, but we know how to (a) detect it and (b) address it (e.g. by introducing UX friction).

IP addresses are also a problem, but we think it will be best addressed at a different layer:

For example, in Chrome: https://developers.google.com/privacy-sandbox/protections/ip-protection

and Safari: https://support.apple.com/en-gb/guide/iphone/iph499d287c2/17.0/ios/17.0

, but there are plenty of others.

Outside of the timing attack and IP masking, can you expand on what else an attacker could use to track the users?

Browsers are working towards removing every bit of entropy that can be used for fingerprinting, so I'm curious if anything occurred to you that isn't being actively worked on. For example:

https://github.com/WICG/ua-client-hints#explainer-reducing-user-agent-granularity


Sam Goto <goto=40google.com@dmarc.ietf.org> Wed, May 8, 1:46 PM (19 hours ago) to Joseph, <oauth@ietf.org>


On Wed, May 8, 2024 at 1:34 PM Joseph Heenan <joseph@authlete.com> wrote: Hi Neil


On 8 May 2024, at 18:45, Neil Madden <neil.e.madden@gmail.com> wrote:


On 8 May 2024, at 17:52, Sam Goto <goto@google.com> wrote:

On Wed, May 8, 2024 at 7:23 AM Neil Madden <neil.e.madden@gmail.com> wrote:


In particular, the call to the accounts endpoint assumes that the IdP is willing to provide PII about the user to the browser. That seems questionable.

Aside from a privacy/security threat model perspective (meaning, the user agent already has visibility over every network request made available to the content area)

Sure, but if I use the recommended auth code flow or encrypted ID tokens, then PII is not exposed to the browser. And it’s not just the browser itself in the current proposal, as the token is exposed to javascript, of course, so the usual XSS risks.

Sam’s response here is fair, but also note that as far as I understand it you can still use the authorization code flow or encrypted id tokens with the FedCM API

That's correct: the browser doesn't open the response from the IdP to the RP, so it can, for example, be encrypted.

I was assuming that Neil was referring to the fact that the id_assertion_endpoint (which contains the user's PII accounts at the IdP) suddenly becomes transparent to the browser.
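The pass-through Sam describes can be sketched as follows. This is a toy illustration: base64 stands in for real encryption (e.g. JWE), and the function names are invented; the point is only that the intermediary forwards the token without parsing it.

```python
import base64
import json

def idp_issue_token(claims: dict) -> str:
    # Stand-in for an encrypted token; in the real scheme only the RP
    # could meaningfully decode this.
    return base64.b64encode(json.dumps(claims).encode()).decode()

def browser_forward(token: str) -> str:
    # The browser treats the IdP's response as an opaque string and
    # hands it to the RP unmodified.
    return token

def rp_consume(token: str) -> dict:
    # Only the RP interprets the token's contents.
    return json.loads(base64.b64decode(token))
```

Because the browser never inspects the blob, the IdP and RP are free to agree on any encoding, including one the browser cannot read.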


Joseph Heenan via ietf.org Wed, May 8, 2:01 PM (19 hours ago) to Sam, <oauth@ietf.org>


On 8 May 2024, at 21:43, Sam Goto <goto@google.com> wrote:


On Wed, May 8, 2024 at 1:34 PM Joseph Heenan <joseph@authlete.com> wrote: Hi Neil


On 8 May 2024, at 18:45, Neil Madden <neil.e.madden@gmail.com> wrote:


On 8 May 2024, at 17:52, Sam Goto <goto@google.com> wrote:

On Wed, May 8, 2024 at 7:23 AM Neil Madden <neil.e.madden@gmail.com> wrote:


In particular, the call to the accounts endpoint assumes that the IdP is willing to provide PII about the user to the browser. That seems questionable.

Aside from a privacy/security threat model perspective (meaning, the user agent already has visibility over every network request made available to the content area)

Sure, but if I use the recommended auth code flow or encrypted ID tokens, then PII is not exposed to the browser. And it’s not just the browser itself in the current proposal, as the token is exposed to javascript, of course, so the usual XSS risks.

Sam’s response here is fair, but also note that as far as I understand it you can still use the authorization code flow or encrypted id tokens with the FedCM API

That's correct: the browser doesn't open the response from the IdP to the RP, so it can, for example, be encrypted.

I was assuming that Neil was referring to the fact that the id_assertion_endpoint (which contains the user's PII accounts at the IdP) suddenly becomes transparent to the browser.

Oh yes, that’s true - but (I think) the data from the id_assertion_endpoint at least isn’t exposed to javascript and isn’t vulnerable to XSS?

Joseph


Sam Goto <goto=40google.com@dmarc.ietf.org> Wed, May 8, 2:02 PM (19 hours ago) to Joseph, <oauth@ietf.org>


On Wed, May 8, 2024 at 2:01 PM Joseph Heenan <joseph@authlete.com> wrote:


On 8 May 2024, at 21:43, Sam Goto <goto@google.com> wrote:


On Wed, May 8, 2024 at 1:34 PM Joseph Heenan <joseph@authlete.com> wrote: Hi Neil


On 8 May 2024, at 18:45, Neil Madden <neil.e.madden@gmail.com> wrote:


On 8 May 2024, at 17:52, Sam Goto <goto@google.com> wrote:

On Wed, May 8, 2024 at 7:23 AM Neil Madden <neil.e.madden@gmail.com> wrote:


In particular, the call to the accounts endpoint assumes that the IdP is willing to provide PII about the user to the browser. That seems questionable.

Aside from a privacy/security threat model perspective (meaning, the user agent already has visibility over every network request made available to the content area)

Sure, but if I use the recommended auth code flow or encrypted ID tokens, then PII is not exposed to the browser. And it’s not just the browser itself in the current proposal, as the token is exposed to javascript, of course, so the usual XSS risks.

Sam’s response here is fair, but also note that as far as I understand it you can still use the authorization code flow or encrypted id tokens with the FedCM API

That's correct: the browser doesn't open the response from the IdP to the RP, so it can, for example, be encrypted.

I was assuming that Neil was referring to the fact that the id_assertion_endpoint (which contains the user's PII accounts at the IdP) suddenly becomes transparent to the browser.

Oh yes, that’s true - but (I think) the data from the id_assertion_endpoint at least isn’t exposed to javascript and isn’t vulnerable to XSS?

That's correct.


Joseph


Tom Jones <thomasclinganjones@gmail.com> Wed, May 8, 3:09 PM (18 hours ago) to Sam, <oauth@ietf.org>

Y'all are missing another option. (re Sam's comments) The user is given a user agent to use on their own device when accessing enterprise data. I know of two that are shipping today: https://www.getprimary.com/ Now it is possible to access other sites with these browsers as well - in fact they specifically support sites like GitHub in such a way as to get enterprise creds. Perhaps we need to test these mechanisms with those user agents? Is it helpful if I asked them for feedback? When this gets to Blink they will see it, as they are Chromium forks. That might cause them to change the FedCM code.

Interested in the problem of multiple IdPs. I run three different browsers now just to deal with that problem.

Relying parties accept different IdP????? that sux. I want to decide who I am based on the circumstances!!!!!!

..tom


Sam Goto Wed, May 8, 3:16 PM (18 hours ago) to me, Sam, <oauth@ietf.org>


On Wed, May 8, 2024 at 3:10 PM Tom Jones <thomasclinganjones@gmail.com> wrote: Y'all are missing another option. (re Sam's comments) The user is given a user agent to use on their own device when accessing enterprise data. I know of two that are shipping today: https://www.getprimary.com/ Now it is possible to access other sites with these browsers as well - in fact they specifically support sites like GitHub in such a way as to get enterprise creds. Perhaps we need to test these mechanisms with those user agents? Is it helpful if I asked them for feedback? When this gets to Blink they will see it, as they are Chromium forks. That might cause them to change the FedCM code.

Interested in the problem of multiple IdPs. I run three different browsers now just to deal with that problem.

Relying parties accept different IdP????? that sux. I want to decide who I am based on the circumstances!!!!!!

That's quite interesting! Can you expand on this a bit more?


Neil Madden neil.e.madden@gmail.com via ietf.org Wed, May 8, 3:34 PM (18 hours ago) to Sam, oauth


On 8 May 2024, at 21:39, Sam Goto <goto@google.com> wrote:

On Wed, May 8, 2024 at 1:26 PM Neil Madden <neil.e.madden@gmail.com> wrote: Looking at these slides again, and at the spec, does this even work to defeat tracking? The browser makes two requests to the IdP prior to getting consent from the user:

1. To look up the accounts of the user (identifying the user)
2. To look up the metadata of the client (identifying the RP)

Isn’t it rather trivial for a tracker posing as an IdP to correlate these two requests? The privacy considerations talk about IP addresses and timing as ways to correlate

The timing attack is the one that we think we are most vulnerable to at this layer, but we know how to (a) detect it and (b) address it (e.g. by introducing UX friction).

IP addresses are also a problem, but we think it will be best addressed at a different layer:

For example, in Chrome: https://developers.google.com/privacy-sandbox/protections/ip-protection

and Safari: https://support.apple.com/en-gb/guide/iphone/iph499d287c2/17.0/ios/17.0

In both cases the TLS connection is end to end, so I guess all user agents need to set up and tear down two independent connections? And make sure the IdP/tracker doesn’t encode tracking information into session resumption tickets?

As a user of the Safari method, I also know that I have to turn it off surprisingly frequently. (And some people deliberately turn it off).


, but there are plenty of others.

Outside of the timing attack and IP masking, can you expand on what else an attacker could use to track the users?

Does this assume that the tracker is trying to track a lot of people at once? Obviously, in the limit, if only a single person pings the endpoints at a certain time then it is obvious that those requests are related. How many near-simultaneous pings of a tracker do you need to ensure a sufficient level of non-correlation? For n simultaneous users the tracker needs to smuggle through log2(n) bits of entropy to be able to precisely correlate the two requests.
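Neil's log2(n) bound is ordinary information-theoretic arithmetic, and can be made concrete (illustrative calculation only):

```python
import math

# To single out one of n users whose requests land in the same time
# window, a tracker must smuggle ~log2(n) bits of identifying entropy
# through some side channel shared by the two requests.
def bits_needed(n: int) -> float:
    return math.log2(n)
```

For a million simultaneous users, for example, the tracker needs roughly 20 bits to link the two requests precisely; with only a handful of users, a single timing observation may suffice.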

Another method I can think of is that the tracker responds to the request for the /config endpoint with randomised /accounts and /client-metadata endpoints, such that it can correlate the two calls to those endpoints. Maybe browsers should fetch it multiple times from different IP addresses, geographically distributed?
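That randomised-endpoint attack is easy to sketch. In this hypothetical malicious IdP, the dictionary keys mirror FedCM's config-file field names, but the tagging scheme itself is invented for illustration:

```python
import secrets

def malicious_config() -> dict:
    # Serve every browser a unique config so that later fetches of the
    # accounts and client-metadata endpoints carry the same linking tag.
    tag = secrets.token_hex(8)
    return {
        "accounts_endpoint": f"/accounts/{tag}",
        "client_metadata_endpoint": f"/client_metadata/{tag}",
    }
```

Caching the config, or fetching it from multiple vantage points as suggested above, would surface or dilute this kind of per-browser variation.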

I’m sure I can come up with other methods.

Browsers are working towards removing every bit of entropy that can be used for fingerprinting, so I'm curious if anything occurred to you that isn't being actively worked on. For example:

https://github.com/WICG/ua-client-hints#explainer-reducing-user-agent-granularity

— Neil


Tom Jones <thomasclinganjones@gmail.com> Wed, May 8, 3:34 PM (18 hours ago) to Sam

I am under NDA and would need their permission to provide any of their data.

Google is now doing something similar with an enterprise browser (I think that's what it's called).

I can describe one use case

An enterprise puts some of its data on a third-party web site with access control, e.g. GitHub. That access to GitHub is device/UA-bound. This is similar to the FIDO stuff in the back of the slide deck.

This whole area is starting to look more like DRM every week.

thx ..Tom (mobile)


Sam Goto <goto=40google.com@dmarc.ietf.org> Wed, May 8, 3:46 PM (17 hours ago) to Neil, oauth


On Wed, May 8, 2024 at 3:33 PM Neil Madden <neil.e.madden@gmail.com> wrote:

On 8 May 2024, at 21:39, Sam Goto <goto@google.com> wrote:

On Wed, May 8, 2024 at 1:26 PM Neil Madden <neil.e.madden@gmail.com> wrote: Looking at these slides again, and at the spec, does this even work to defeat tracking? The browser makes two requests to the IdP prior to getting consent from the user:

1. To look up the accounts of the user (identifying the user)
2. To look up the metadata of the client (identifying the RP)

Isn’t it rather trivial for a tracker posing as an IdP to correlate these two requests? The privacy considerations talk about IP addresses and timing as ways to correlate

The timing attack is the one that we think we are most vulnerable to at this layer, but we know how to (a) detect it and (b) address it (e.g. by introducing UX friction).

IP addresses are also a problem, but we think it will be best addressed at a different layer:

For example, in Chrome: https://developers.google.com/privacy-sandbox/protections/ip-protection

and Safari: https://support.apple.com/en-gb/guide/iphone/iph499d287c2/17.0/ios/17.0

In both cases the TLS connection is end to end, so I guess all user agents need to set up and tear down two independent connections? And make sure the IdP/tracker doesn’t encode tracking information into session resumption tickets?

As a user of the Safari method, I also know that I have to turn it off surprisingly frequently. (And some people deliberately turn it off).


, but there are plenty of others.

Outside of the timing attack and IP masking, can you expand on what else an attacker could use to track the users?

Does this assume that the tracker is trying to track a lot of people at once? Obviously, in the limit, if only a single person pings the endpoints at a certain time then it is obvious that those requests are related. How many near-simultaneous pings of a tracker do you need to ensure a sufficient level of non-correlation? For n simultaneous users the tracker needs to smuggle through log2(n) bits of entropy to be able to precisely correlate the two requests.

Yeah, that's the kind of analysis that we are starting to do too: does it become asymptotically harder to correlate users as the number n of simultaneous users grows?

I think I'm a lot more concerned about the n > 1B users scenario than I am with the n=1 scenario, so if we stopped the former but not the latter, it would be forward progress.
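Neil's back-of-the-envelope bound above can be sketched numerically (a hypothetical helper for illustration, not part of FedCM or any browser):

```python
import math

def correlation_bits(n: int) -> int:
    """Minimum bits of entropy a tracker must smuggle across the two
    FedCM fetches (accounts lookup and client-metadata lookup) to pair
    them uniquely among n near-simultaneous users."""
    return math.ceil(math.log2(n)) if n > 1 else 0

print(correlation_bits(2))      # 1 bit separates two users
print(correlation_bits(10**9))  # ~30 bits suffice for a billion users
```

This is what makes the n = 1 case hopeless and the n > 1B case tractable: the tracker's required side channel grows only logarithmically, but any mitigation that strips even a few tens of bits of distinguishing signal defeats precise correlation at scale.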


Another method I can think of is that the tracker responds to the request for the /config endpoint with randomised /accounts and /client-metadata endpoints, such that it can correlate the two calls to those endpoints. Maybe browsers should fetch it multiple times from different IP addresses, geographically distributed?

Yep, those are interesting approaches (caching comes to mind too).
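The endpoint-randomisation attack Neil describes can be made concrete with a short sketch (a hypothetical tracking IdP, not real FedCM server code; the endpoint key names follow the FedCM config-file format):

```python
import json
import secrets

def malicious_config_response() -> str:
    """Hypothetical tracking IdP: each config fetch mints fresh random
    endpoint paths, so the later accounts and client-metadata fetches
    that use those paths can be linked back to the same browser."""
    tag = secrets.token_hex(8)  # 64 bits: far more than log2(n) needed
    return json.dumps({
        "accounts_endpoint": f"/accounts-{tag}",
        "client_metadata_endpoint": f"/client-metadata-{tag}",
        "id_assertion_endpoint": f"/assert-{tag}",
    })
```

A mitigation along the lines suggested above would be for browsers to fetch the config from several vantage points (or through a shared cache) and refuse to proceed if the endpoint URLs differ between fetches.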


I’m sure I can come up with other methods.

I'd love to work with you - and others here - to harden the system. I'm really grateful for the thought you have put into this analysis so far, and it is exactly the reason why we figured this would be a great community to connect with: thanks!


Neil Madden neil.e.madden@gmail.com via ietf.org Wed, May 8, 3:50 PM (17 hours ago) to Joseph, oauth

On 8 May 2024, at 22:01, Joseph Heenan <joseph@authlete.com> wrote:

On 8 May 2024, at 21:43, Sam Goto <goto@google.com> wrote:


On Wed, May 8, 2024 at 1:34 PM Joseph Heenan <joseph@authlete.com> wrote: Hi Neil


On 8 May 2024, at 18:45, Neil Madden <neil.e.madden@gmail.com> wrote:


On 8 May 2024, at 17:52, Sam Goto <goto@google.com> wrote:

On Wed, May 8, 2024 at 7:23 AM Neil Madden <neil.e.madden@gmail.com> wrote:


In particular, the call to the accounts endpoint assumes that the IdP is willing to provide PII about the user to the browser. That seems questionable.

Aside from a privacy/security threat model perspective (meaning, the user agent already has visibility over every network request made available to the content area)

Sure, but if I use the recommended auth code flow or encrypted ID tokens, then PII is not exposed to the browser. And it’s not just the browser itself in the current proposal: the token is exposed to JavaScript, of course, so the usual XSS risks apply.

Sam’s response here is fair, but also note that as far as I understand it you can still use the authorization code flow or encrypted id tokens with the FedCM API

That's correct: the browser doesn't open the response from the IdP to the RP, so it can, for example, be encrypted.

I was assuming that Neil was referring to the fact that the id_assertion_endpoint (which contains the user's PII accounts at the IdP) becomes, suddenly, transparent to the browser.

Oh yes, that’s true - but (I think) the data from the id_assertion_endpoint at least isn’t exposed to javascript and isn’t vulnerable to XSS?

That depends on whether the IdP correctly enforces the presence of the sec-fetch-dest header. If it doesn’t then yes, it would be vulnerable. Presumably it’s also vulnerable on older/niche browsers that don’t block sec-* headers: caniuse.com reckons > 8% of users globally are using browsers that don’t understand any sec-fetch-* headers. I’m not sure when sec-* was added to the forbidden list.
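The IdP-side check being discussed is small; a minimal, framework-agnostic sketch (assuming request headers arrive as a dict) might look like:

```python
def is_browser_fedcm_request(headers: dict) -> bool:
    """Honor FedCM endpoint requests only when the browser itself set
    Sec-Fetch-Dest: webidentity. Conforming browsers treat Sec-* as
    forbidden request headers, so page JavaScript cannot forge this
    value; requests from browsers that never send Sec-Fetch-* headers
    fail closed."""
    return headers.get("Sec-Fetch-Dest") == "webidentity"
```

Note that, as Neil points out, the guarantee only holds on browsers that actually enforce the forbidden-header list; on older browsers the check fails closed (rejecting the request) rather than leaking data.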

I guess, flipping this around, we might ask what is the legitimate purpose for which browsers need to access the user’s name, email address (both required) and other identifying information? I’d have thought an identifier (possibly randomised) and some user-supplied account nickname would be sufficient.

— Neil


Sam Goto <goto=40google.com@dmarc.ietf.org> Wed, May 8, 4:07 PM (17 hours ago) to Neil, oauth


On Wed, May 8, 2024 at 3:50 PM Neil Madden <neil.e.madden@gmail.com> wrote:
I guess, flipping this around, we might ask what is the legitimate purpose for which browsers need to access the user’s name, email address (both required) and other identifying information? I’d have thought an identifier (possibly randomised) and some user-supplied account nickname would be sufficient.

That's easier to answer: the browser needs name/email/picture to construct an account chooser, which is the UX that tested best with users by a wide margin.

Static/unpersonalized permission prompts - example in Safari, example in Chrome - perform extremely poorly in comparison to account choosers, although they have other benefits (namely ergonomics and extensibility), so Chrome (and others) expose them too, in the form of the Storage Access API.
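For context, the accounts-endpoint response that feeds the account chooser looks roughly like the following (field names follow the FedCM draft; the values are of course illustrative):

```python
import json

# Illustrative FedCM accounts-endpoint response. name, email, and
# picture are exactly the PII the account-chooser UI renders for each
# account - the data whose necessity is being debated above.
accounts_response = json.dumps({
    "accounts": [{
        "id": "1234",
        "name": "Jane Doe",
        "email": "jane@example.com",
        "picture": "https://idp.example/avatars/1234.png",
    }]
})

parsed = json.loads(accounts_response)
print(sorted(parsed["accounts"][0]))  # ['email', 'id', 'name', 'picture']
```

Neil's counter-proposal amounts to replacing name/email/picture here with an opaque (possibly randomised) identifier plus a user-chosen nickname.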


— Neil


Warren Parad <wparad=40rhosys.ch@dmarc.ietf.org> 5:33 AM (4 hours ago) to Sam, oauth

I think I'm still missing something, and I'm sure it was discussed somewhere and I just didn't see it. How will this help avoid the NASCAR problem for sites when a user signs up, or when the user signs in on a new browser?


Neil Madden neil.e.madden@gmail.com via ietf.org 7:25 AM (2 hours ago) to Sam, OAuth

On 9 May 2024, at 00:06, Sam Goto <goto@google.com> wrote: [...]

I guess, flipping this around, we might ask what is the legitimate purpose for which browsers need to access the user’s name, email address (both required) and other identifying information? I’d have thought an identifier (possibly randomised) and some user-supplied account nickname would be sufficient.

That's easier to answer: the browser needs name/email/picture to construct an account chooser, which is the UX that tested best with users by a wide margin.

Static/unpersonalized permission prompts - example in Safari, example in Chrome - perform extremely poorly in comparison to account choosers, although they have other benefits (namely ergonomics and extensibility), so Chrome (and others) expose them too, in the form of the Storage Access API.


Yeah, that's what I suspected. Did you do research that specifically called out email addresses as a must-have?

PS - although this is an OAuth group, you may also want to look at things like Dropbox's Chooser/Saver widgets (https://www.dropbox.com/developers/chooser), which provide fine-grained permissions to access specific files/folders using a file dialog UX rather than a redirect-based flow. I appreciate that may not be your initial focus, but one for the "mood board" as it were...

-- Neil


Dick Hardt via ietf.org 8:06 AM (1 hour ago) to Warren, Sam, oauth

The NASCAR problem is rooted in the fact that the RP does not know which provider(s) the user has, so sites showed all the choices. FedCM only shows the provider(s) the user has.


Tom Jones <thomasclinganjones@gmail.com> 9:06 AM (33 minutes ago) to Dick, Warren, Sam, oauth

Has anyone considered what information the RP verifier should supply for FedCM to function well on behalf of both the verifier and the user?

References