In the internet age, personal privacy has become a more complex topic than ever before. The University of Auckland Business School’s Gehan Gunasekara has spent his career trying to define this concept.
We find ourselves in the midst of one of history’s great turning points, something akin to the Enlightenment, the invention of the printing press, or the Industrial Revolution, during which our relationship to the world and each other is undergoing irrevocable change. “And these are not just incremental changes,” says Gehan Gunasekara, associate professor in commercial law at the University of Auckland Business School. “This is massive change that has been suddenly imposed on us.”
Gunasekara is well placed to comment on those changes: his entire academic career has run alongside the profound change that the internet has wrought on all our lives. In 1994, a shared office at the University of Auckland’s former Tamaki Campus brought him into contact with academics from diverse disciplines. At the time, Gunasekara’s main interest was traditional commercial law; he specialised in franchise law. But some colleagues from the Information Systems Department asked to pick Gunasekara’s brains about the then newly minted Privacy Act, for a research paper they were working on about the centralisation of health data. Gunasekara had never heard of the Privacy Act – signed into law the previous year – but agreed to look into it on their behalf. On doing so, he was immediately intrigued by the concept of privacy, and by how it interacted with the law. He soon began to rack up publications in the area.
As the internet steadily became more and more central to modern life, and then as social media began to intrude on our lives, he found that privacy and how to safeguard it had become a more universal concern. Gunasekara stuck with the academic niche he had found for himself, eventually helping to form the Privacy Foundation in 2017 as a civil society voice to lobby for the privacy concerns of New Zealanders.
The technological development he has witnessed over his career, he says, has steadily changed human behaviour. Take the increasing primacy of the internet, social media, and now generative AI – society changes each time it has a new technological avenue to run down, often following the technology in unforeseen directions. But, he says, people’s fundamental values regarding their privacy have remained largely the same. The question of how the law should uphold and protect those values has underpinned Gunasekara’s entire career.
Gunasekara has a favourite metaphor he uses to explain the challenge confronting lawmakers as they consider how to safeguard privacy in this fast-changing environment: the difference between Newtonian and quantum physics. Newtonian physics did an adequate job of explaining the world for hundreds of years – but, as physicists now know, those same principles no longer apply at the subatomic level. So it is in the Internet Age. “We can still use those old privacy laws for most interactions, like interacting with your employer, interacting with government departments, your health insurance company. But then we’ve got this new environment of apps and big data.”
Traditional law-making, where a new technology is legislated for “block-by-block” after its inception, is “ultimately doomed to fail” in this new age, Gunasekara says. He points to the European Union’s Artificial Intelligence Act and its Data Act as such “reactive” laws born out of that traditional impulse. But just as we require quantum physics to understand the world at the level of its most fundamental building blocks, we need a “new legal framework” to logically deal with technology and the way it proliferates, he says. “That’s what I’m working on with my writing at the moment, I’m arguing for a comprehensive set of principles based on values that not only people can agree to, but ultimately, countries could agree to.” These principles, Gunasekara says, are based on universal values like transparency and proportionality. “The bottom line is: how can we stop being exploited? That’s my concern.”
There exists what he calls “a gap” between our legal conception of privacy and the use companies can make of our data. Currently, he says, privacy is mostly understood as the keeping of one’s identity secret. “These companies will say, ‘Look, we’re not going to give away your identity, we’re not going to disclose this information to anyone…’ But they can still exploit [your] data commercially in order to train their algorithms.” Many everyday user contracts are wide enough to allow that to happen. “They say they are going to do this to improve our performance or to improve our services [but] that’s a very broad term.”
Perhaps surprisingly, Gunasekara writes almost everything, from policy submissions to confidential memos, using Microsoft Word’s cloud-based transcription feature. He says he has no real concern about the safety of the data he willingly trusts Microsoft to safeguard, but he would take issue if – under those broadly defined “improvement to services” terms – he found his data was being mined in order to improve the realism of an Xbox game or some other similarly unrelated product. “That wouldn’t be, in my opinion, a fair use of my data.” What exactly would constitute fair use of that data is a problem Gunasekara finds himself increasingly grappling with. In Aotearoa New Zealand’s case, this must factor in Te Ao Māori perspectives, he says.
He says the notice and consent model – by which tech companies seek our permission to use our data via a box we click at the end of a contract we don’t read – is “completely broken”. It works well, he says, when there is a single vertical relationship between a consumer and a provider – say, between a patient and a doctor – but “now we’ve got this horizontal thing, where data is being mined every time we interact, by an unknown number of actors. We don’t really know who is syphoning off our data and what they are doing with it.”
The danger of not thinking hard about these issues, Gunasekara says, is the prospect of “death by a thousand cuts”. “We’re slowly going to get intrusions to the point where one day we will suddenly wake up and realise most of the laws we have simply haven’t protected us.” But he remains an incurable optimist; he has no doubt that just as humanity found its way through other technological upheavals, so it will weather this storm too. The way the big digital companies in certain jurisdictions have been made to pay for the news content they make available on their platforms – as in Australia’s News Media Bargaining Code, Canada’s Online News Act and, potentially, Aotearoa’s own Fair Digital News Bargaining Bill currently before Parliament – gives him hope that big tech can be brought sufficiently to heel. “It might appear to people that we are in a state of complete disarray. But really, society has been through these changes before.”