When facial recognition technology is deployed for public safety, controversy tends to follow. That was the case with the London Metropolitan Police in July 2018.
The London Policing Ethics Panel reported a lack of clarity about the legality of the technology’s use and suggested that the police work closely with the relevant commissioners to ensure proper oversight.
Almost a year later, the technology’s use continued to raise privacy concerns.
The independent panel conducted a survey of Londoners’ perceptions of facial recognition and its use in public safety.
Over 57% of respondents said the police’s use of facial recognition software was acceptable. That figure rose to 83% when respondents were asked whether they supported using the technology to search for offenders.
While half of respondents believed such software would make them feel safer, more than a third raised concerns about its impact on privacy.
Dr. Suzanne Shale, chair of the London Policing Ethics Panel, commented on the survey’s findings:
“Given the impact that digital technology can have on public confidence in the police, it is vital to ensure that the use of such software does not compromise that relationship.”
After reviewing the police’s use of the software, the Panel published a final report recommending that five conditions be met before the technology is deployed.
Beyond those five conditions, the Panel also established a framework to support the police in trialling new technologies. The framework addresses the ethical concerns any new technology raises for public protection, comprising 14 questions on engagement, diversity, and inclusion that the police should consider before proceeding with any trial.
In conclusion, the Panel’s chair said: “there are important ethical issues to address, but they do not amount to reasons not to use facial recognition at all. We will watch closely how the use of this technology progresses to ensure that it continues to be scrutinised ethically.”