Face recognition is not ready for use and may never be

Benjamin Franklin and CCTV

“Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.” – Benjamin Franklin for the Pennsylvania Assembly in its “Reply to the Governor” (11 Nov. 1755).

Let’s start with a little bit of history…

CCTV was installed on London streets to prevent crimes from happening (and to help solve them). Let’s hear from Detective Chief Inspector Mick Neville: “CCTV was originally seen as a preventative measure,” Neville told the Security Document World Conference in London. “Billions of pounds has been spent on kit, but no thought has gone into how the police are going to use the images and how they will be used in court. It’s been an utter fiasco: only 3% of crimes were solved by CCTV.”

This comes from a 2008 article published by the Guardian; the same paper reported a year later: “The use of closed-circuit television in city and town centers and public housing estates does not have a significant effect on crime, according to Home Office-funded research.”

See also some data compiled by the ACLU.

Surely things have improved over the past decade with new and fancy face recognition tech, right?

Let’s go back to the Guardian: “Police are facing calls to halt the use of facial recognition software to search for suspected criminals in public after independent analysis found matches were only correct in a fifth of cases…”

Or, to be precise, it got it wrong 81% of the time.

Yet just a week ago the High Court in Cardiff ruled that “police use of automatic facial recognition technology to search for people in crowds is lawful.”

It baffles and scares me. Poor results from CCTV crime prevention are one thing; declaring the use of AFR (automated face recognition) technology, with its dubious results, legal and valid is quite another.

Let’s do a short history lesson on image recognition.

2001 – The Viola–Jones object detection framework provided competitive object detection rates in real time (a minimal detection sketch follows this timeline).
The same year, the city of Tampa, Florida used a facial recognition system as it hosted the Super Bowl.
2005 – Navneet Dalal and Bill Triggs published Histograms of Oriented Gradients, on pedestrian detection in still images.
2009 – The ImageNet database was published – 14 million annotated images in more than 20,000 categories.
2012 – Krizhevsky, Sutskever and Hinton demonstrated an object recognition algorithm achieving 85% accuracy.
2012 – Building High-level Features Using Large Scale Unsupervised Learning showed that a computer can pick out features from unlabelled data (in this case, cats in YouTube stills).
2014 – Fei-Fei and Karpathy published Deep Visual-Semantic Alignments for Generating Image Descriptions, a model that generates natural language descriptions of images.
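To make the starting point of that timeline concrete, here is a minimal sketch of Viola–Jones-style face detection using the pretrained Haar cascade that ships with OpenCV. The input file name `crowd.jpg` and the tuning parameters are assumptions for illustration, not a definitive setup:

```python
import cv2

# Load the pretrained frontal-face Haar cascade bundled with OpenCV
# (an implementation of the Viola–Jones detection framework).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("crowd.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Slide the cascade over the image at multiple scales; these parameter
# values are common defaults, not tuned ones.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a rectangle around each detected face and save the result.
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

print(f"Detected {len(faces)} face(s)")
cv2.imwrite("crowd_detected.jpg", img)
```

Note that this only detects faces – it says nothing about who they belong to, which is exactly where the harder and more error-prone recognition step begins.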

Neural networks and image recognition algorithms made steady progress, to the point where, in 2015, a Facebook algorithm could identify a partly covered face with 83% accuracy. Face recognition is used at airports to pass through automated checkpoints and to unlock our phones, and object recognition is fast enough to enable autonomous cars (and weapons).

It was about that time (2015) that Google’s new Photos app labelled some dark-skinned people as gorillas.

The pertinent questions concern bias and privacy violations: how well researched and regulated is AFR, given that it encroaches on important parts of our lives? As more training datasets are produced, the images are usually scraped from mugshots or the internet without consent. The potential problems come from governments and companies using this technology to target and analyze people, building systems on biased datasets, often developed in secrecy.

NYC Mobile surveillance van

One example is the secret collaboration between IBM and the NYPD (started in 2012) to target people by skin color. The secrecy of such programs is a clear abuse of public trust, and there is also a strong possibility of sensitive data being sold to third parties or simply hacked. Commercial face recognition software has repeatedly been shown to be less accurate on people with darker skin. Meanwhile, a predictive policing algorithm called PredPol was shown to unfairly target certain neighborhoods. And in a truly disturbing case, the COMPAS algorithm, which predicts the likelihood of recidivism to guide sentencing, was found to be racially biased.

PredPol, a predictive policing company, looks a lot like an idea from the sci-fi movie “Minority Report” (I’m going to write more about it later). Incorporated in 2012, PredPol’s software has been used in dozens of US and British cities, according to documents obtained by Motherboard.
They, by the way, perform daily backups which are kept indefinitely on their servers.

As experts point out: “There has been a lack of objective science about efficiency and effectiveness of predictive policing,” says Andrew Ferguson, a professor of law at the University of the District of Columbia School of Law.
Also, as Shahid Buttar puts it: “If you overpolice certain communities, and only detect crime within those communities, and then try to provide a heat map of predictions, any AI will predict that crimes will occur in the places that they’ve happened before.”
This kind of feedback loop is not something I would call fair policing (a toy simulation of it follows below). The same applies to mugshots uploaded to a database, where they are likely to reappear in police searches for suspects.
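To see how little it takes for that loop to form, here is a toy simulation – entirely my own construction, not anything PredPol actually runs. Two districts have identical true crime rates, but patrols are reallocated each day according to past detections:

```python
import random

random.seed(42)  # reproducible run

# Two districts with IDENTICAL underlying crime rates. District 0 is
# historically overpoliced, so its records already hold more *detected* crime.
TRUE_RATE = 0.05          # same daily crime probability in both districts
detected = [8, 2]         # historical detection counts (the biased record)

for day in range(5000):
    # "Predictive" step: tomorrow's patrol share mirrors past detections.
    total = detected[0] + detected[1]
    patrols = [detected[0] / total, detected[1] / total]
    for d in (0, 1):
        crime = random.random() < TRUE_RATE         # crime happens equally...
        if crime and random.random() < patrols[d]:  # ...but is only seen
            detected[d] += 1                        # where police patrol

print(detected)  # district 0 piles up far more recorded crime anyway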

Going back to CCTV, let’s look at a report about it, published in August 2019: “Comparitech researchers collated a number of data resources and reports, including government reports and police websites, to get some idea of the number of CCTV cameras in use in 120 major cities across the globe. We focused primarily on public CCTV—cameras used by government entities such as law enforcement.
Here are our key findings:

  • Eight out of the top 10 most surveilled cities are in China
  • London and Atlanta were the only cities outside of China to make the top 10

A primary argument in favor of CCTV surveillance is improved law enforcement and crime prevention. We compared the number of public CCTV cameras with the crime and safety indices reported by Numbeo, which are based on surveys of that site’s visitors.
For both indices, the correlation was weak (safety index: r = 0.168; crime index: r = −0.168; n = 120). A higher number of cameras just barely correlates with a higher safety index and a lower crime index.”
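For perspective, here is a quick back-of-the-envelope check of what r = 0.168 across 120 cities actually buys you, using the standard t-test for a Pearson correlation. This is my own calculation, not part of the Comparitech report:

```python
from math import sqrt
from scipy import stats

r, n = 0.168, 120

# Proportion of variance in the safety index "explained" by camera counts.
r_squared = r ** 2                        # ~0.028, i.e. under 3%

# Standard t-test for whether a Pearson r differs from zero.
t = r * sqrt((n - 2) / (1 - r ** 2))      # ~1.85
p = 2 * stats.t.sf(abs(t), df=n - 2)      # ~0.067, not significant at 0.05

print(f"r^2 = {r_squared:.3f}, t = {t:.2f}, p = {p:.3f}")
```

In other words, camera counts account for less than 3% of the variation in those indices, and the correlation is not even clearly distinguishable from zero.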
And I thought for a minute that maybe it was just poor video quality. CCTV didn’t deter the rioters in London, just as it didn’t deter the thief who stole my bike in front of a camera.

OTHER CASES

Another area of recognition tech is the detection of emotions. One of the companies behind these kinds of applications is Affectiva. It claims to have a dataset of 7.5m faces from 87 countries, most of it collected from opt-in recordings of people watching TV or driving their daily commute. Another company, Faception, offers predictions of how likely someone is to become a terrorist or a paedophile. Amazon’s Rekognition update is going to improve face analysis to include emotions.
Again, facial emotion analysis is prone to bias. Just think of the potential problems: in Japan, a smile may indicate politeness rather than happiness; people’s facial expressions can be taken out of context; and so on.

Popular and quite ubiquitous is another of our “fingerprints” – the voice. The Israeli company Voicesense uses real-time analysis to evaluate users (for loans, price discrimination, etc.). Others, like CallMiner and RankMiner, have a similar business approach. This is a growing market, expected to reach $15.5bn by 2029.

The boom started with voice assistants and the promise of controlling our gadgets by voice. Then we had a “couple” of snafus: a Samsung smart TV was recording people in their living rooms, and Amazon’s Alexa recorded someone’s private conversation and mistakenly sent it to another user. Employees of several companies were found to be listening to your calls and to what you say to Alexa.

We do know that voices are recorded by voice assistants for “improvements” and stored on servers. Our voice is highly personal and unique – the timbre, the speed, the pauses we make. The usual bias problems arise here too: you can tell race and ethnic origin from a voice. It’s biometric data, but also something more. Once your voice has been profiled, there is a strong possibility of also estimating changes in your health and behaviour. And you cannot change your voice…

There are many good possible applications, like SimSensei, which tries to detect depression. Corporations are trying to get their voice assistants into hospitals. Startups like Sonde Health and Ellipsis Health record audio diaries through an app, which analyses them along with call logs and location to determine how the patient feels and to track changes. Still, the risk of privacy violations and misuse by governments and corporations seems to outweigh the benefits until we can somehow safeguard and protect our most intimate data.

SUMMARY

As of now, face recognition technology is too immature and unreliable, not properly regulated, and a threat to our rights and well-being (who wants to feel watched and assessed all the time, with a strong possibility of misjudgment?). Taking our “fingerprints” – faces or voices – and using them without consent is outrageous and dangerous, as they could be used for identity fraud in the future if the databases are hacked.

Troubling cases keep cropping up. Amazon helps police with Ring doorbells.

Homeland Security wants to use facial recognition on 97 percent of departing air passengers by 2023 (thehill.com).
And as you can see in this Twitter thread, it’s already happening.

Fortunately, there has been some backlash. San Francisco banned facial recognition in May 2019; its “Stop Secret Surveillance” ordinance will prevent the adoption of any surveillance tech, including automatic license plate readers. Oakland and Somerville also banned AFR.

Musicians have joined a campaign against the use of facial recognition technology at concerts (it was used at a Taylor Swift concert in 2018). The campaign is a good resource for information about developments.

In the US there is a petition website you can use – https://www.banfacialrecognition.com/ – with a brilliant map showing where facial recognition surveillance is happening.
There is a similar map for the UK – https://facialrecognitionmap.com/

In July, Microsoft’s president wrote in a blog post:

“We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology.”

We all should.
