There are many unanswered questions about how technologies are being used, why they are necessary, and whether they risk infringing on human rights or privacy, write Andrew Chen and Kristiann Allen.
The idea of “emergence”, in a philosophical sense, is the notion that a system can have properties, behaviours and naturally forming rules or patterns that individual parts of the system do not have by themselves – the interactions between the components create something new. Snowflakes demonstrate this phenomenon, where individual ice crystals form and grow as they circulate in the air, leading to unpredictable but complex patterns. This perspective considers how seemingly independent parts of a system co-evolve.
Earlier this month, RNZ reported on New Zealand Police’s Review of Emergent Technologies. It included a stocktake of technologies “tested, trialled or rolled out” by NZ Police, from locating where 111 calls are coming from to facial recognition technology for finding people in CCTV footage. In total, 20 technologies were identified with a further nine “under consideration”. The review was urgently commissioned after the police commissioner was caught unaware that the High Tech Crime Unit had trialled the controversial Clearview AI product – understandably, he wanted to know if there were any other unknown surprises on the horizon.
The use of the term “emergent” in the review’s title is interesting. Police intend for it to mean “new” or “in development”, but the similarity to “emergence” also reminds us that none of these technologies exist on their own – they form part of a wider police organisation and can have interactions and synergies that are less predictable than when they are considered independently. In what way could automatic number plate recognition be combined with drones? How might a facial recognition tool currently restricted for use by the investigations team broaden in scope to other teams over time? What information might a smartphone analysis tool uncover that is incidental to an investigation? There is perhaps a missed opportunity for the review to consider the emergence of these types of interactions and combined effects (which could be good or bad). There are also other, arguably less digital, technologies and scientific tools that are, or could be, part of the system as well, such as DNA sampling, chemical analysis of substances, or measuring brain waves.
We can properly consider these scenarios only when we have an understanding of what technologies are being used. But it is concerning that some of the technologies described in the police review had not previously been disclosed to the public. While NZ Police could argue that disclosing the use (or proposed use) of a particular technology might compromise its effectiveness against clever criminals, this argument is somewhat undermined when vendors and distributors publicly say they have NZ Police as a customer or that they offer the same product that NZ Police uses. Being transparent would go a long way towards avoiding the vacuum that will inevitably be filled with misconceptions and misinformation.
Many of the technologies in the review aren’t too alarming. For instance, the online form for non-emergency reports has keyword scanning to check if there is actually something high-priority that needs more urgent attention. It’s probably good that land and marine search and rescue can get GPS information for people who call in to say they’re lost. Most New Zealanders would agree that it’s appropriate for police to use software tools to automatically detect child abuse material so that human officers don’t have to trawl through millions of disturbing images. However, the risks and negative consequences are much more severe in some areas than in others and should not go unexamined. We’ve written about the issues presented by facial recognition technologies in the past, but shouldn’t there be more scrutiny of, for instance, the fact that the NZ Police executive endorsed the use of “remotely piloted aircraft systems (aka drones)” in June 2019? This particular technology is described in the review with a single sentence, yet it significantly changes the way that police can engage in surveillance activities and how that might impact the public.
Nor does this review offer much detail about process. How does a new technology make it from being a potential tool, to trial and testing, to operationalisation? Who is responsible for making the relevant decisions, and what criteria are they considering? What role is there for consulting external experts or the public, and taking their input seriously? The review says “privacy, legal, and ethical implications have been appropriately considered”, but this statement by itself doesn’t offer much confidence to people who can’t see those considerations to evaluate for themselves. What public guarantees are there that this work has been done to a high standard, and that the technology will be used in ways that actually improve public safety? Establishing a Police Technology Ethics Group or similar with a diverse range of members, including external experts, could help improve public confidence that broad impacts are being considered and that harms are mitigated. The New Zealand public service has not previously engaged much in the “technology assessment” that is common in Europe and the US, but recent bodies like the Data Ethics Advisory Group for StatsNZ and the Data Science Review Board for Immigration NZ provide examples of how this could work.
After the emergent technologies review was conducted but before it was released to the public, NZ Police developed some new rules requiring formal approval from one of two governance groups before trialling emergent technologies. This is a good step in that it helps us understand the approval processes and clearly identifies the groups that are accountable for decisions. But we have no visibility into what trials these groups might be considering, what they have approved, or by what criteria they make their decisions. Better transparency would help improve confidence that these processes are taken seriously and that decisions are robust and of high quality. It could also give stakeholder groups and the public an opportunity to highlight potential risks or harms that may have been overlooked by police. Collaborative processes with a diversity of inputs can be instrumental in revealing implicit biases and unrecognised risks.
Something else police could consider is moratoriums on the use of more controversial technologies, or “red lines” that they will not cross. This is particularly important where technologies have been shown to have significant error rates or bias that disproportionately affects certain groups of people, and where combining that with the power of the police may be dangerous. Police repeatedly state in the review that facial recognition technologies are currently only used on stored footage in an investigative setting, not on live CCTV feeds in a real-time or frontline policing context. If NZ Police intend to maintain this policy on an ongoing basis, then it may help public confidence and understanding if they make such distinctions more obvious and pledge not to cross these lines, even if that decision is to be reviewed as the technology advances. A policy position like “police pledge not to use facial recognition on live CCTV feeds, to be reviewed every five years” might offer some confidence without unduly limiting NZ Police’s ability to consider the role new technologies might have in their toolkit as those technologies continue to improve over time. Similar provisional positions could be taken on predictive policing and online surveillance, which are high-risk and controversial topics. This approach could also help NZ Police avoid scope creep that erodes rights protections through incremental changes to policy over time.
Lastly, all of the above assumes that NZ Police are internally motivated, equipped, and enabled to change the way they approach emergent technologies and be far more transparent about their plans and actions. To their credit, NZ Police’s release of this review has been helpful, and there are encouraging statements about strengthening governance and oversight of emergent technologies, considering privacy and ethics, and ensuring better public and stakeholder engagement. But if the recommendations are not ultimately adopted, then the key method to realise the beneficial effects would be through government regulation.
To be sure, regulation is a powerful tool, which is used much more sparingly today than in the past, but these emergent technology challenges are not merely “operational issues” that are beneath regulatory scrutiny. Providing clarity to both NZ Police and the public about what is and isn’t acceptable use of these emergent technologies is an appropriate role for government to play. The Search and Surveillance Act 2012 needs to be updated to reflect the changes in technology that now allow for far more invasive and automated actions, rather than relying on police interpreting the intention of the Act in a way that protects the rights and freedoms of individuals. This direction is also needed for the judiciary, so they can understand how parliament wants these new technologies to be treated and where the limits on warrants are. It’s important to note that once a technology or tool is approved for use by police and operationalised, the process of stopping its use is much harder than not starting in the first place.
The review is a good start in terms of shining a light on emergent technologies that NZ Police are experimenting with and in some cases operationalising. But it also leaves us with many more unanswered questions about how these technologies are being used, why they are deemed to be necessary, what the oversight and audit mechanisms are, and whether these technologies risk infringing on human rights or violating privacy principles. The review leaves us wondering how these technologies are being incorporated into the organisation that is the police and into the process that is policing, and what new assumptions, behaviours, and patterns may emerge from the use of individual or combined technologies over time. As the review notes, the government’s principles for the safe and effective use of data and analytics say that “guidance, oversight, and transparency are essential to fostering trust, confidence, and integrity around the use of data the government holds on behalf of New Zealanders.” This trust, confidence and integrity doesn’t come automatically – it must be earned and constantly maintained by all government agencies, perhaps especially NZ Police.
Andrew Chen is a research fellow and Kristiann Allen is associate director – policy and international engagement at Koi Tū: The Centre for Informed Futures, an independent and apolitical think-tank based at The University of Auckland.
Thank you to Mark Hanna, Anne Bardsley, Jill Rolston, Nessa Lynch, and Colin Gavaghan for providing review and feedback.