Identity Crisis

Image Credit: Garry Knight via Flickr

It’s Friday night; you’ve had a long week in the lab, and your colleagues decide that it’s finally time to take off your goggles and let your hair down. There’s drink to be drunk! At the bar, one by one the bouncer allows everyone to pass…until it’s your turn. “Sorry, can I see your ID?” Of course you don’t have it, so you trudge, dejected, back to your own flat, left wondering why the bouncer couldn’t see the maturity in your eyes. Soon, however, they may be able to.

Once exclusive to government agencies and camp crime dramas, facial recognition technology has become so cheap and readily available that it’s not unusual to hear of high street chains (7-11, Walmart, Amazon, et al.) and certain stadiums and venues implementing these systems, each with their own motivation for doing so. Previously, security was the most commonly cited reason for utilising such technology: the ability to remotely pick a dangerous face out of a crowd is hugely appealing to organisations charged with keeping us safe, from the police to national security services. More recently, however, “end-user experience” is more likely to be credited. Facebook recently filed a patent application for a new version of this technology 1 that would instantly allow retail companies access to Facebook’s personality profile of a given customer, as well as alerting the establishment to a customer’s mood and intent in order to deliver tailored customer service. While anything that prevents pushy upselling when you’re not in the mood for it can’t be all bad, many worry about the precedent this sets with respect to privacy – shouldn’t it be an individual’s choice to wear their heart on their sleeve?

So how did we get here? One major component of this technology is the measurement of “biometrics” which, in this case, refers to the relative positioning of facial features. The retrieved data is then cross-referenced against a pre-collected database to determine whose face the biometric data corresponds to.
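
In code, that matching step amounts to a nearest-neighbour search with a tolerance. Below is a minimal sketch in Python, assuming faces have already been reduced to numerical feature vectors; the names, vectors and threshold are all made up for illustration.

```python
import numpy as np

# Hypothetical pre-collected "database": name -> biometric feature vector.
# In a real system these vectors would come from a face-analysis model;
# here they are stand-in numbers for illustration only.
enrolled_db = {
    "alice": np.array([0.12, 0.85, 0.40, 0.33]),
    "bob":   np.array([0.90, 0.10, 0.65, 0.72]),
}

MATCH_THRESHOLD = 0.3  # illustrative tolerance, not a standard value

def identify(probe_vector):
    """Cross-reference a probe's biometric vector against the database.

    Returns the closest enrolled identity, or None if nothing falls
    within the matching threshold (an "unknown face").
    """
    best_name, best_dist = None, float("inf")
    for name, enrolled_vector in enrolled_db.items():
        dist = np.linalg.norm(probe_vector - enrolled_vector)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < MATCH_THRESHOLD else None

print(identify(np.array([0.11, 0.86, 0.38, 0.35])))  # -> alice
print(identify(np.array([0.50, 0.50, 0.50, 0.50])))  # -> None (no match)
```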

Far from being a modern breakthrough, this technology was pioneered by Woodrow Wilson Bledsoe in the 1960s 2. Bledsoe established coordinate points on facial features in simple forward-facing photographs and then compared this data to previously analysed subjects. These measurements, such as the shape of the jaw, the distance between the eyes, the thickness of the lips and the length of the nose, vary enough from person to person that the comparisons met an acceptable standard of fidelity. However, the computer memory restrictions of the era, and the fact that the data could not be gathered automatically but required manual marking of the relevant features, made the process slow and methodical, with much room for improvement.
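
To make Bledsoe’s idea concrete, here is a toy Python sketch of the approach as described above: manually marked landmark coordinates are turned into scale-normalised distances, and two photographs are compared by how much those measurements differ. All coordinates are invented; the second subject is simply the first at twice the scale, so the difference comes out near zero.

```python
import math

def feature_distances(landmarks):
    """Turn manually marked (x, y) landmark points into comparable measurements.

    Normalising by the inter-eye distance makes the numbers somewhat
    robust to the overall scale of the photograph.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    eye_gap = dist(landmarks["left_eye"], landmarks["right_eye"])
    return {
        "nose_length": dist(landmarks["nose_top"], landmarks["nose_tip"]) / eye_gap,
        "mouth_width": dist(landmarks["mouth_left"], landmarks["mouth_right"]) / eye_gap,
        "jaw_drop":    dist(landmarks["nose_tip"], landmarks["chin"]) / eye_gap,
    }

# Landmarks from one forward-facing photograph (made-up coordinates).
subject_a = feature_distances({
    "left_eye": (30, 50), "right_eye": (70, 50),
    "nose_top": (50, 50), "nose_tip": (50, 75),
    "mouth_left": (38, 90), "mouth_right": (62, 90),
    "chin": (50, 115),
})

# A previously analysed subject: the same face photographed at twice the scale.
subject_b = feature_distances({
    "left_eye": (60, 100), "right_eye": (140, 100),
    "nose_top": (100, 100), "nose_tip": (100, 151),
    "mouth_left": (77, 180), "mouth_right": (124, 180),
    "chin": (100, 230),
})

# A small total difference across all measurements suggests the same person.
difference = sum(abs(subject_a[k] - subject_b[k]) for k in subject_a)
print(f"total difference: {difference:.3f}")  # near 0 -> likely the same face
```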

Nowadays, thanks to modern machine learning algorithms, these data points can be extracted even from moving images. Digital cameras now have much higher resolutions, and the prevalence of mobile technology and social media creates enormous (normally user-generated) photographic libraries of faces that are already inherently connected to a myriad of other personal data. Combined, these factors greatly increase the potential of what can be achieved using the same basic principles established by Bledsoe.
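
As a rough illustration of how cheap and accessible this has become, the sketch below runs a stock face detector over live webcam frames using the open-source OpenCV library. Note the assumptions: it uses OpenCV’s bundled classical Haar-cascade detector rather than the deep networks modern systems favour, and it only finds faces; a real recognition pipeline would add the landmarking and matching steps sketched earlier.

```python
import cv2  # pip install opencv-python

# OpenCV ships with a classical Haar-cascade face detector; modern systems
# use deep networks, but the frame-by-frame loop looks much the same.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

capture = cv2.VideoCapture(0)  # 0 = default webcam
while True:
    ok, frame = capture.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:  # draw a box around each detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
capture.release()
cv2.destroyAllWindows()
```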

These advances have a dark side. Worryingly, there have been modern studies claiming to discern criminality from the shape of a subject’s face using these algorithms, echoing the obsolete and disproven field of Victorian phrenology. 3 It is vital that those developing this technology consider the cultural setbacks its adoption may cause when it absorbs the biases already inherent in those designing and implementing these systems. Many have had uncomfortable experiences with early speech-recognition technology being unable to determine what was said unless the user mimicked a standard English or American accent, and similar setbacks and biases are apparent in the facial recognition field. These problems have led some to claim that this technology has been developed by white men, for white men, and is severely lacking in accuracy when it comes to recognising gender and other qualities in non-white groups. On top of these issues is the very real problem of in-built racism that sadly occurs when a machine learns from interactions in online communities. 4 5

Private sector implementation of this technology has already garnered controversy. A recent example is Madison Square Garden, where patrons felt targeted after it was discovered that the venue had been scanning all attendees, running their faces against an in-house security database and collecting data on their age, gender and ethnicity for market research purposes. 6 More worrying still, certain governments, such as China’s, have been criticised for ethnically discriminating against entire communities by warning people whom CCTV does not recognise as fitting a certain description that they’re “in the wrong neighbourhood”. 7

Such is the pushback against having personal information readily available to any business with a camera that certain subcultures have begun developing means to confuse or otherwise spoof these algorithms. Introducing fake faces elsewhere on the person, or obscuring features with specially designed make-up or facewear, creates false data points that provoke false positives and outright errors. 8 9 10 Some simply stick to the tried and tested method of wearing a mask. However, even these need to be specially designed, as surveillance systems are now capable of seeing past face coverings over 50% of the time. 11 12
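
The logic of these attacks can be seen with the toy matcher sketched earlier: a threshold-based comparison only needs the measured features to be nudged far enough from the enrolled record for the match to fail. The numbers below are invented, and this shows only the evasion side of spoofing (an outright miss rather than an impersonation).

```python
import numpy as np

MATCH_THRESHOLD = 0.3  # same illustrative tolerance as the earlier sketch

enrolled = np.array([0.12, 0.85, 0.40, 0.33])     # stored vector for one subject
honest_probe = np.array([0.11, 0.86, 0.38, 0.35])  # the same face, measured honestly

# Adversarial make-up or facewear shifts the measured features: even a
# modest, targeted perturbation pushes the probe past the threshold.
spoof_probe = honest_probe + np.array([0.20, -0.20, 0.15, 0.15])

for label, probe in [("honest", honest_probe), ("spoofed", spoof_probe)]:
    dist = np.linalg.norm(probe - enrolled)
    verdict = "match" if dist < MATCH_THRESHOLD else "no match"
    print(f"{label}: distance {dist:.3f} -> {verdict}")
```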

Undoubtedly, the implementation of these new technologies could bring welcome advantages to our day-to-day lives in the form of major convenience, saving us from fishing out tickets and IDs while passing through security gates or allowing us to pay for our shopping without the need for interaction. However, having your entire online identity available to the highest bidder, and your location potentially tracked by mysterious third parties, opens us up to exploitation and a growing number of civil liberties issues. With this kind of data collection becoming more commonplace, it is imperative that we all become more conscious of the potential impacts such technology will have.

Of course you could choose to ignore it. But you will be seen doing so.

This article was specialist edited by Gabriela De Sousa and copy edited by Kirsten Woollcott.

References

  1. http://www.freshpatents.com/-dt20171123ptan20170337602.php
  2. https://archive.org/details/firstfacialrecognitionresearch
  3. https://arxiv.org/abs/1611.04135
  4. https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html
  5. https://bits.blogs.nytimes.com/2015/07/01/google-photos-mistakenly-labels-black-people-gorillas/
  6. https://www.nytimes.com/2018/03/13/sports/facial-recognition-madison-square-garden.html?smid=tw-nytsports&smtyp=cur
  7. https://www.bloomberg.com/news/articles/2018-01-17/china-said-to-test-facial-recognition-fence-in-muslim-heavy-area
  8. https://ahprojects.com/projects/hyperface/
  9. http://www.ewdn.com/2017/07/24/top-yandex-engineers-develops-anti-facial-recognition-techology/
  10. https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf
  11. http://www.urmesurveillance.com/urme-prosthetic/
  12. https://www.newscientist.com/article/2146703-even-a-mask-wont-hide-you-from-the-latest-face-recognition-tech/
