Facial Recognition

Full Title or Meme

Context

Facial recognition finds itself in a challenging position. When used well, for instance as localized strong authentication within a Secure Enclave on a secure device, it can increase security and open access to underserved markets (e.g., loans and health insurance extended to populations in West Africa).

No technology is without its issues, however, and facial recognition soon ran into problems similar to those of the Nelson Chequer: research showed that its accuracy suffers from racial and gender bias, and it has been reported over the past year that several companies, such as Clearview.AI, are building facial recognition repositories by scraping photographs from social media sites. These issues of accuracy and misuse (the negative expression of ease of use) are weaponized as facial recognition is adopted by governments and law enforcement. Clearly, action is needed to establish controls and protections for this biometric. (This past summer, for example, the ACM called for a pause in the use of facial recognition to ensure its ethical use.) This ethical enforcement may be long in coming; enter adversarial research.

Adversarial research has most often sought to discover and address issues with the accuracy of a technology, and facial recognition is no exception. More recently, however, there has been a shift toward providing resources and tools to individual users so that they can protect their own data. This often takes the form of protections added to photos before they are uploaded to social media (Instagram, Facebook, Twitter, etc.). After all, who is more interested in protecting a person’s privacy than that person? The goal of these recent efforts is to allow individuals to participate (in parallel with enterprises) in the protection of their own biometric privacy.

Papers of Interest

In July of 2020, Camera Adversaria was released by a research group from Australia. It sought to corrupt the classification of objects within photographs by injecting Perlin/simplex noise into the photo. The structures this noise forms span the entire photograph and are organic in nature; in fact, such noise is often used to generate landscapes or terrain for digital environments. While not specifically designed to disrupt facial recognition, it does appear to have an impact on facial detection. (In my limited testing, it caused Google’s ML Kit to classify one of my own selfies as a “leatherback turtle.” I’ll leave it to the reader to decide whether that is an upgrade or a downgrade.)
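
To make the general idea concrete, here is a minimal Python sketch of this style of protection: overlaying a smooth Perlin-noise field on a photo before it is shared. This is not the Camera Adversaria implementation; the noise package, the grid scale, and the blending strength below are illustrative assumptions only.

# Rough sketch: overlay a Perlin-noise field on a photo before sharing it.
# NOT the Camera Adversaria implementation; the `noise` package, the grid
# scale, and the strength value are illustrative assumptions.
import numpy as np
from PIL import Image
from noise import pnoise2  # pip install noise

def add_perlin_noise(in_path, out_path, scale=0.01, strength=25.0):
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.float32)
    h, w, _ = img.shape
    # Build a smooth, organic noise field that spans the whole photograph.
    field = np.array([[pnoise2(x * scale, y * scale, octaves=4)
                       for x in range(w)] for y in range(h)], dtype=np.float32)
    # Apply the same offset to all three color channels and clip to valid range.
    perturbed = np.clip(img + strength * field[..., None], 0, 255)
    Image.fromarray(perturbed.astype(np.uint8)).save(out_path)

add_perlin_noise("selfie.jpg", "selfie_protected.jpg")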

In August of 2020, a research team at the University of Chicago released the open-source “Fawkes” tool. It extracts the feature set from an identified face and then “cloaks” it with a feature set from a different face. It seeks to maximize the “spatial difference” between the two sets while limiting the difference in human perception (using the Structural Dissimilarity Index, or DSSIM, to optimize for loss). Once enough modified photos are released into the wild, commercial facial recognition systems (Microsoft, Amazon, and Face++ are the big three) start misclassifying the original face.
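
As a rough conceptual sketch of what feature-space cloaking looks like (and emphatically not the Fawkes code), the Python snippet below nudges a photo so that its embedding drifts toward a decoy face’s embedding while keeping the pixel change small. Fawkes uses a face-specific feature extractor and a DSSIM perceptual budget; here a generic torchvision ResNet embedding and a simple per-pixel clamp stand in for both, purely for illustration.

# Conceptual sketch of feature-space "cloaking": nudge a photo so its
# embedding moves toward a different face's embedding, while keeping the
# pixel change small. This is NOT the Fawkes implementation: Fawkes uses a
# face-specific feature extractor and a DSSIM budget; a generic torchvision
# ResNet embedding and an L-infinity clamp stand in for both here.
import torch
import torchvision.models as models

extractor = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
extractor.fc = torch.nn.Identity()  # use penultimate features as an embedding
extractor.eval()

def cloak(image, decoy_image, steps=100, lr=0.01, eps=0.03):
    """image, decoy_image: float tensors of shape (1, 3, H, W) in [0, 1]."""
    with torch.no_grad():
        decoy_feat = extractor(decoy_image)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feat = extractor((image + delta).clamp(0, 1))
        # Pull the cloaked photo's features toward the decoy face's features.
        loss = torch.nn.functional.mse_loss(feat, decoy_feat)
        loss.backward()
        opt.step()
        # Crude stand-in for a perceptual budget: cap the per-pixel change.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (image + delta).clamp(0, 1).detach()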

This works decently well, with the most significant protection provided against Microsoft’s Azure Face API. (Note that the authors had to re-release their tool in April 2021 because Azure had somehow improved its defense against Fawkes.) The method is processor- and time-intensive, however: at the default “medium” protection level it takes about 8 minutes per photo on my 2018 MacBook Pro, and a whopping 22 minutes for a single photo at the maximum protection level.

In January of 2021, “LowKey” was released by researchers from the University of Maryland and the US Naval Academy. It uses a similar method of polluting the user’s photos by shifting their feature sets, but LowKey’s algorithm attempts to disrupt both facial detection and classification: the entire facial recognition pipeline, if you will. It uses a different metric to minimize perceptual loss: Learned Perceptual Image Patch Similarity (LPIPS). It is not open source; instead, users upload their photos to a hosted web service and receive the modified copies back via email. The authors report faster processing, however.
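
LowKey itself is not publicly available, but the LPIPS metric it optimizes against is. The short Python sketch below (using the lpips package on PyPI, an assumption on my part about tooling) demonstrates only how that perceptual distance is measured between an original photo and a perturbed copy; it is not the LowKey attack.

# Sketch: measuring perceptual change with LPIPS, the metric LowKey uses to
# keep perturbations visually subtle. This shows only the metric (via the
# `lpips` PyPI package), not LowKey's attack, which is not open source.
import torch
import lpips  # pip install lpips

loss_fn = lpips.LPIPS(net='alex')  # AlexNet-backed perceptual distance

# Images as float tensors of shape (1, 3, H, W), scaled to [-1, 1].
original = torch.rand(1, 3, 256, 256) * 2 - 1
perturbed = (original + 0.05 * torch.randn_like(original)).clamp(-1, 1)

distance = loss_fn(original, perturbed)
print(f"LPIPS distance: {distance.item():.4f}")  # lower means more visually similar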

It is important to note that these three methods (Camera Adversaria, Fawkes, and LowKey) all provide dynamic protection for the end user’s photos, meaning that the protection is generated uniquely for each image. Current machine learning models restrict this kind of processing on the device itself due to processing power and other constraints, ruling out the use of Fawkes and LowKey until further advancements arrive on the mobile platform. Thus, only Camera Adversaria is “in the flow,” meaning that it is fast and cheap enough to be part of the photographic process on the mobile device itself. This is essential for ease of use, of course: most users will not be incentivized enough to protect their own photos if that protection is too cumbersome or not “built in.” Even then, security culture takes time to adopt; think of the adoption curve seen with seatbelts in transportation.

These adversarial approaches are, of course, largely academic at this point, but incorporating some of these techniques on the enterprise side of the house is possible. Much like hashing important or sensitive data, applying protections to photographic content may be a helpful addition for enhancing privacy.

Some of the above is from Mike Kiser @ SailPoint

References