Without a strong legal and ethical framework and clear policy for use, FRT can have grave implications for individual and collective rights, writes Nessa Lynch.
Automated facial recognition technology (FRT) uses an algorithm to match a facial image to one already stored in a system. It is used in automated passport control and other border control measures, as a biometric identifier in the banking, security and access contexts, and on social media platforms and various other consent-based applications.
The use of FRT in policing is controversial worldwide. Unlike other biometric identifiers used in policing, such as DNA and fingerprints, the automated collection and matching of facial images is generally not covered by legislation. Facial images may be collected at a distance, without the person’s consent or even their knowledge. There is, moreover, a spectrum of impact on individual and collective rights, from simple identity matching of a person who has been arrested, up to live collection and matching of images through FRT-equipped camera surveillance systems. It also matters whether images are matched against existing police databases or watchlists, other state databases, image databases supplied by the private sector, or open-source data. As my research collaborators have found, the use of live automated FRT in public places has significant implications for privacy rights, as well as raising concerns about a chilling effect on the rights to freedom of expression and lawful protest.
While we have no evidence that FRT is currently in widespread use in the policing context in New Zealand, the police are known to have tendered for an updated FRT system and to have carried out a small-scale trial of a controversial system known as Clearview earlier this year. FRT is known to be used more widely in this country by other public agencies, such as Immigration NZ and the Department of Internal Affairs, and in the private sector.
A recent decision of the Court of Appeal of England and Wales illustrates the risks where the technology is trialled without having an appropriate legal and ethical framework in place.
The appellant, Mr Bridges, is a resident of Cardiff, in Wales. He was scanned by FRT, which had been (overtly) deployed by South Wales Police on a public street in Cardiff city centre, and on another occasion at a protest at a defence exhibition. The system, named “AFR Locate”, operates by capturing facial images from a CCTV camera and automatically comparing biometric data derived from those images with data derived from a “watchlist” of images. A police camera operator may then review any matches before deciding on further action or intervention.
Bridges took a case against South Wales Police, alleging that his right to respect for private life had been infringed and that the use of the technology breached data protection and equality legislation. The Divisional Court found that the right to private life was engaged, but that the use of FRT was lawful and proportionate in the circumstances.
Bridges appealed to the Court of Appeal. The court held unanimously that the interference with Bridges’ privacy was unlawful because there were no clear guidelines on the parameters of use, meaning that police had too wide a discretion. However, the court did find that the use was proportionate, meaning that the impact on Bridges was minor while the benefits (presumably crime control and public safety) were significant. The court also found that South Wales Police had not undertaken a proper data protection assessment and had failed to assess whether the system could be biased.
While the court noted that automated FRT involves a much higher level of intrusion than police taking photographs or using CCTV in public places, it declined to say that specific legislative authorisation was required, as it is for DNA and fingerprints. Thus, although campaigners have called for a ban on its use, police forces around the United Kingdom are now refining policy and guidance on the use of the technology to take account of the decision.
As for the implications for New Zealand, police here have indicated that, after the criticised trial of Clearview, they are undertaking a review of surveillance technologies to ensure privacy implications are properly considered. While adherence to the privacy regime is necessary and welcome, there remain concerns about the lack of a wider legal and regulatory framework for the use of FRT and other surveillance technologies, as well as the lack of a clear policy for use. It is worth remembering that Bridges was able to take his action on the basis of alleged breaches of rights-protecting statutes that have no direct equivalents in this jurisdiction.
New Zealand does now have what is said to be the world’s first Algorithm Charter, which sets principles for public sector agencies using algorithms as the basis of, or to guide, decision-making. Not all agencies (including, at the time of writing, the police) have signed up to the charter. It is a voluntary set of guidelines, and the means by which an individual can query improper use and seek redress is unclear. It may also be noted that the Government Chief Data Steward has convened an independent group that is available to assist public sector agencies with data ethics issues, particularly those relating to algorithms. (I am a member of this group, but the views expressed here are my own.)
Finally, our research group will report our findings on lawful and ethical use of FRT later in the year. With a report from the Law Commission on the use of DNA in criminal investigations also expected, it may be an appropriate time for reflection on the wider framework for the use of biometrics in policing in the particular societal and cultural context of Aotearoa New Zealand.
Nessa Lynch is an associate professor at the Faculty of Law, Victoria University of Wellington, and is currently leading a research project on facial recognition technology.