Job losses caused by automation may grab the bulk of the headlines, but more of us may be affected by changes to recruitment and worker surveillance, writes Colin Gavaghan, director of the Centre for Law and Policy in Emerging Technologies
Until recently, a question such as that in the headline has led immediately to discussions about how many jobs will be lost to the technological revolution of artificial intelligence. Over the past few years, though, more of us have started looking at some other aspects of this question. Such as: for those of us still in work, how will things change? What will it be like to work alongside, or under, AI and robots? Or to have decisions about whether we’re hired, fired or promoted made by algorithms?
Those are some of the questions our multi-disciplinary team at Otago University, funded by the New Zealand Law Foundation, have been trying to answer. Last week, we set out our findings in a new report.
There’s a danger of getting a bit too Black Mirror about these sorts of things, of always seeing the most dystopian possibilities from any new technology. That’s a trap we’ve tried hard to avoid, because there really are potential benefits in the use of this sort of technology. For one thing, it’s possible that AI and robots could make some workplaces safer. ACC recently invested in Kiwi robotics company Robotics Plus, for example, whose products are intended “to reduce the risk of accidents at ports, forestry sites and sawmills”.
Of course, workplace automation can also increase danger. We’ve already seen examples of workplace robots causing fatalities. One of our suggestions is that New Zealand’s work safety rules need to catch up with the sort of robots we’re likely to be working alongside in the future – fencing them off from human workers and providing an emergency off-switch isn’t going to be the answer for “cobots” that are designed to work with and around us.
Physical injuries from robots may present the most visceral image of the risks of workplace automation. Luckily, they’re likely to be fairly rare. Far more people, we think, will be affected by “algorithmic management” – the growing range of techniques used to allocate shifts, direct workers and monitor performance.
As with workplace robots, there’s potential here for the technology to improve things for workers. One report talked about how it could “benefit workers by giving clearer advance notice of when shifts will be and making it easier to swap and change them”. There’s no guarantee, though, that algorithmic management tools will be used to benefit workers. Our earlier warning aside, it’s hard not to feel just a bit Black Mirror when seeing images of Amazon warehouses where workers are micro-managed to an extent beyond the wildest dreams of Ford or Taylor.
A particular concern that’s grown during the Covid crisis is the apparently increasing prevalence of workplace surveillance. While by no means a new phenomenon, AI technologies could offer employers the opportunity to monitor their workers more closely and ubiquitously than ever before.
Of course, not all employers will treat their workers like drones. But workplace protection rules don’t exist for the good employers. If we want to avoid the soul-crushing erosion of privacy, autonomy and dignity that could accompany the worst abuses of this technology, we think those rules will need to be tightened in various ways.
Concerns about AI in the workplace don’t start with algorithmic management, though. A lot of them start before the employment relationship even begins. Increasingly, AI technology is being used in recruitment: from targeted job adverts to shortlisting of applicants, and even at the interview stage, where companies like HireVue provide algorithms to analyse recorded interviews with candidates.
The use of algorithms in hiring poses a serious risk of reinforcing bias, or of rendering existing bias less visible. Most obviously, there’s a risk that algorithms will base their profiles of a “good fit” for a particular role on the history of people who’ve occupied that role before. If those people happen to have been overwhelmingly white, male and middle class … well, it’s not hard to guess how that will probably go. Also, affective recognition software that’s been trained on mostly white, neurotypical people could make unfair adverse judgments about people who don’t fit into those categories, even if they score highly in the sorts of attributes that really matter. (HireVue recently stopped using visual analysis for its assessment models, but since these sorts of platforms will obviously have to rely on inferences from something – maybe voice inflection or word choices – questions about cultural, class or neurodiversity awareness remain.)
But doesn’t New Zealand already have laws protecting us against workplace hazards, privacy violations and discrimination? It does indeed. Like almost every other new technology, workplace AI isn’t emerging into a legal vacuum. Unfortunately, some of those laws were designed for a different time, which can lead to what tech lawyers call “regulatory disconnection” when there’s a major change to the technology’s form or use. For instance, the current rules around workplace robots seem to assume that they can be fenced off from human workers, whereas the “cobots” that are now coming into use will be working in close proximity to humans.
In other cases, the law seems fine, but the problem is spotting when the technology violates it. Our Human Rights Act prohibits discrimination on a whole bunch of grounds, including sex, race and disability, but that won’t be much help to someone who has no way of knowing why the algorithm has declined them. It may even be that employers themselves won’t know who has been screened out at an early stage, or on what grounds.
As we argue, though, it doesn’t have to be that way. Just as workplace robots could reduce injuries and fatalities, so could algorithmic auditing software help to detect and reduce bias in recruitment, promotion, etc. It’s not as though humans are perfect at this! Maybe AI could make things better. What we can’t do, though, is complacently assume that it will do so.
In April, the EU Commission published a draft law for Europe which would require scrutiny and transparency for certain uses of AI technology. That would include a range of functions related to employment, such as recruitment, promotion and termination; task allocation; and monitoring and evaluating performance. Last year, New York City Council introduced a bill that would require algorithmic hiring tools to be audited for bias, and their use to be disclosed to candidates.
Our report calls for New Zealand to take the same kinds of steps. For instance, we propose that consideration should be given to following New York’s example, and requiring manufacturers of hiring tools to ensure those tools include functionality for bias auditing, so that client companies can readily perform the relevant audits.
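To make that idea concrete, here is a minimal sketch of one kind of check such auditing functionality might support – a version of the well-known “four-fifths rule” heuristic for adverse impact, which flags any group whose selection rate falls well below that of the most-selected group. The group labels, the 0.8 threshold and the data format are illustrative assumptions for this sketch, not anything specified in our report or in the New York bill.

```python
# Illustrative sketch of a simple adverse-impact audit for a hiring tool.
# Assumes the tool can export, for each applicant, a demographic group
# label and whether they were shortlisted. Groups "A"/"B" and the 0.8
# threshold (the "four-fifths rule" heuristic) are assumptions for
# illustration only.

from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, shortlisted_bool) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, shortlisted in outcomes:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times
    the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()
            if rate / best < threshold}

# Example: a tool that shortlists group A at 60% but group B at only
# 30% gets flagged, since 0.30 / 0.60 = 0.5, which is below 0.8.
sample = ([("A", True)] * 6 + [("A", False)] * 4 +
          [("B", True)] * 3 + [("B", False)] * 7)
print(adverse_impact(sample))  # {'B': 0.5}
```

A real auditing tool would need to handle far more than this – intersectional groups, statistical significance, and the question of which attributes are even recorded – but the basic point stands: the tool’s outputs should be open to interrogation, rather than taken on trust.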
“Algorithmic impact assessments” looking at matters like privacy, worker safety and wellbeing should be conducted before any algorithm is used in high stakes contexts. And we’ve suggested that there should be important roles for the Office of the Privacy Commissioner and WorkSafe NZ in overseeing surveillance technologies.
We think these steps would go some way to ensuring that New Zealand businesses and workers (actual and prospective) could enjoy the benefits of these technologies, while being protected from the worst of the risks.
Our report isn’t a prediction about what the future workplace will look like when AI and robots are a regular part of it. How things turn out depends substantially on the sorts of choices we make about how to use these technologies. And we’re not proposing that we need fear the future or rage against the machines. But we do think we should be keeping a close, watchful eye on them. Because you can bet they’ll be keeping an eye on us.