Facial recognition is everywhere
There has been a lot of press about facial recognition. Advances in AI and a decline in hardware costs have made the technology ubiquitous. It has been deployed at sporting events, airports, and everywhere in between.

In China, it is everywhere. China uses facial recognition to identify jaywalkers: if you are caught, your image is displayed on a large screen and you can be fined via text message. At certain stores you can even "pay with your face" by having a camera scan your face, which then deducts money from your Alipay account.

What's the deal?
The explosion in use is a result of facial recognition systems becoming near perfect. The National Institute of Standards and Technology (NIST) runs a yearly competition to see who can develop the most accurate algorithm, and the top performers achieve 99%+ accuracy. Microsoft, one of the leaders in the competition, is able to identify the correct individual in a database of 12.5mn pictures 99.5% of the time. Chinese startups have also dominated the competition, achieving 99%+ accuracy and winning international contests for having the most accurate solution.

But has issues
How this technology performs in the real world is alarming. Despite vendors claiming similar accuracy, real-world tests in London have been abysmal. Over the past two years, London has run trials of the technology, and recently released data show how badly it performs.

Show me the data - link to download

In the two plus years the system has been deployed, the computer successfully identified 294 criminals but flagged 2,796 innocent people as criminals - far from perfect. In machine learning speak, the system's 'precision' - the fraction of flagged matches that are actually correct - is an awful 0.095.
However, not all of those innocent people are confronted by the police. If someone is tagged by the system, a person reviews the image for a possible match. After this human review, only 51 people were wrongly identified by the system and then confronted by the police. If we use this as our false positive number, the system's precision jumps to 0.85, much better than the original.
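For the curious, both precision figures can be reproduced directly from the numbers above. A minimal sketch in Python (the helper name `precision` is mine, not anything the Met uses):

```python
def precision(true_positives, false_positives):
    """Precision: the fraction of flagged matches that are actually correct."""
    return true_positives / (true_positives + false_positives)

# Raw system output: 294 correct matches, 2,796 false alarms
raw = precision(294, 2796)        # ≈ 0.095

# After human review: only 51 false matches led to a police stop
reviewed = precision(294, 51)     # ≈ 0.852

print(f"raw precision: {raw:.3f}, after review: {reviewed:.3f}")
```

Same system, same detections - only the definition of "false positive" changes, and the headline number moves by nearly a factor of ten.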

How could the system be so wrong?
While we don't know exactly how the London Metropolitan Police developed their algorithm, the accurate numbers they quote probably come from very controlled lab experiments. They may be getting accurate results against millions of known images, but considering 30mn people visit London a year (including myself in late October - let me know if you want to meet!), all faces the algorithm has never seen before, there is bound to be some confusion.
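The gap between lab accuracy and street performance is largely a base-rate problem: when almost everyone scanned is innocent, even a tiny false positive rate swamps the true matches. A toy illustration (every number below is an assumption chosen for the sketch, not Met data):

```python
# Base-rate sketch: high per-face accuracy still yields terrible precision
# when wanted individuals are rare in the scanned population.
scans = 1_000_000             # faces scanned in a busy public space (assumed)
criminals = 100               # wanted individuals actually in that crowd (assumed)
hit_rate = 0.995              # chance the system flags a real criminal (assumed)
false_positive_rate = 0.001   # chance an innocent face is flagged (assumed)

true_hits = criminals * hit_rate                          # ~100 correct flags
false_alarms = (scans - criminals) * false_positive_rate  # ~1,000 wrong flags
precision = true_hits / (true_hits + false_alarms)

print(f"precision: {precision:.2f}")  # ≈ 0.09, in the ballpark of London's raw figure
```

A system that is right 99.9% of the time per face still produces roughly ten false alarms for every genuine match, simply because innocent faces outnumber wanted ones ten thousand to one.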

This has a few implications going forward
First, some countries are becoming increasingly divided on the use of this technology. San Francisco recently banned it, and the European Commission is also looking to place strict rules on its use. As facial recognition becomes more prevalent, I expect more privacy debates to emerge.

Second, people are developing strategies to get around these systems. Researchers in Russia created a sticker to be placed on your head that prevents algorithms from identifying you. Protesters in Hong Kong were pointing lasers at facial recognition cameras so they couldn't be identified. In London, some people put their hoods up to avoid being on camera.

Lastly - if you're not a criminal, carry plenty of identification. The London police said no one was arrested as a result of a false match, because an ID card would sort that out.

Just another example of low tech saving the day.