Facial recognition technology: a force for good or Orwellian nightmare?
- When used properly, facial recognition software is accurate and effective
- Police can update a real-time watch list on the move and run facial recognition across different devices
- The technology can also be deployed in police vehicles and with body-worn cameras
At the recent BRIT Awards, CCTV and mobile-phone cameras were used in conjunction with a facial recognition database to scan for potential terrorist suspects on a watch list. Meanwhile, Customs and Border Protection (CBP) in the United States recently implemented the ‘biometric entry-exit system’ to identify passengers on some 16,300 flights every week.

Facial recognition technology – in a nutshell

Essentially, this is software that identifies faces – indeed, the technology has been developed to pick individuals out of tens of thousands of potential matches. Technology companies now use artificial intelligence and machine learning to recognise faces from huge data sets (a minimal code sketch of this matching step appears at the end of this article).

Facial recognition and ethics

Recently, amid criticism of facial recognition technology by human rights groups, the Biometrics and Forensics Ethics Group (BFEG) commissioned a report on the trials carried out by South Wales Police in Cardiff. The report outlined a framework of ethical principles that should be taken into account when considering the deployment of live facial recognition, or other automated biometric recognition technologies, for policing purposes.

1. Public Interest. The use of this technology is permissible only when it is employed in the public interest.
2. Effectiveness. The use of this technology can be justified only if it is an effective tool for identifying people.
3. The Avoidance of Bias and Algorithmic Injustice. For the use of the technology to be legitimate, it should not involve or exhibit undue bias.
4. Impartiality and Deployment. If the technology is deployed for policing, it must be used in an even-handed way. For example, it should not disproportionately target certain events, but not others, without a compelling justification.
5. Necessity. Individuals normally have the right to conduct their lives without being monitored and scrutinised. The technology should be used in ways that minimise interference with people engaging in lawful behaviour.
6. Proportionality. In addition to meeting a ‘necessity’ requirement, the technology should also meet a ‘proportionality’ requirement: it can be permissible only if the benefits are proportionate to any loss of liberty and privacy.
7. Impartiality, Accountability, Oversight and the Construction of Watchlists. If humans (or algorithms) are involved in the construction of watchlists for use with the technology, it is essential that they be impartial and free from bias.
8. Public Trust. If the technology is to be used for policing, it is important that those using it (in either operational deployments or trials) engage in public consultation and provide the rationale for its use.
9. Cost-effectiveness. Any evaluation of the use of this technology needs to take into account whether the resources it requires could be better used elsewhere.

Conclusion

No one would argue against any of the principles set down by the BFEG. Nevertheless, technologists hope that the discussion will now turn to the benefits of facial recognition for all members of society, including the value it can add to our safety and security. The real debate needs to be about the technology and how it can continue to develop, rather than about legislation to limit that technology and its uses.
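To make the matching step described above concrete, here is a minimal sketch of watch-list matching using the open-source face_recognition Python library. The image filenames, the watch-list entries and the 0.6 distance threshold are illustrative assumptions for this sketch, not details of any deployed police system.

```python
# A minimal sketch of watch-list matching with the open-source
# face_recognition library (https://github.com/ageitgey/face_recognition).
# Filenames and the 0.6 threshold are illustrative assumptions only.
import face_recognition

# Build the watch list: one 128-dimensional encoding per known face.
# (Hypothetical filenames; each image should contain exactly one face.)
watch_list = {
    "person_a": face_recognition.face_encodings(
        face_recognition.load_image_file("person_a.jpg"))[0],
    "person_b": face_recognition.face_encodings(
        face_recognition.load_image_file("person_b.jpg"))[0],
}

# Encode every face found in a probe frame (e.g. a CCTV still).
frame = face_recognition.load_image_file("cctv_frame.jpg")
for candidate in face_recognition.face_encodings(frame):
    names = list(watch_list)
    # Euclidean distance in encoding space; smaller means more similar.
    distances = face_recognition.face_distance(
        [watch_list[n] for n in names], candidate)
    best = distances.argmin()
    # 0.6 is the library's default tolerance; a real system would tune
    # this to trade false positives against false negatives.
    if distances[best] < 0.6:
        print(f"Possible match: {names[best]} "
              f"(distance {distances[best]:.2f})")
    else:
        print("No watch-list match for this face.")
```

In a live deployment this comparison would run frame by frame against a camera feed, and the choice of threshold would determine the balance between false matches and missed ones – which is precisely where the BFEG’s ‘effectiveness’ and ‘avoidance of bias’ principles come into play.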