Zeffer Cider used $2m of investment to relocate to Hawke’s Bay, creating jobs. (Photo: Zeffer)

Business | March 28, 2019

How Kiwis’ preference for property is starving our startups


Allowing overseas money to pour in and fill the gap left by a dearth of New Zealand investors is robbing us of employment opportunities and valuable tax revenue, Bill O’Boyle writes.

Anyone who’s spent time around the Kiwi startup scene knows that one of the biggest challenges facing our fledgling businesses is the dreaded ‘funding gap’, sometimes cheerfully referred to as ‘the valley of death’.

The scenario is this: New Zealand now has a strong ecosystem of angel investors and government grants that support promising entrepreneurs, but we are falling over at the next significant hurdle. Startups are risky by nature, and when the best (or luckiest) of these businesses survive and need further funding to become world-beating companies they face a dearth of specialised later stage investors willing to take them to the next level.

We need these young companies to expand because they create high-paying jobs and contribute significantly to the country’s tax revenue (unlike our friends Apple, Google and Facebook).

A partial cause of this conundrum is New Zealanders’ obsession with property and term deposits. Money gets invested into houses and banks rather than businesses that can grow our economy, and this preference leads to underdeveloped capital markets. It opens the way for large amounts of overseas money to flow in and snap up homegrown Kiwi enterprises, resulting in a steady procession of companies leaving the New Zealand stock exchange. Recent examples have included Tegel, Diligent and Trade Me.

The point at which our current environment falls down is when a company is doing around $1m to $2m in annual revenue, and is looking for investment of between $1m and $10m to take things to the next phase. This is broadly known as the venture capital (VC) stage of investment, where businesses have a product to sell and are looking to grow sales, spend more on marketing and further develop their offering. These companies face limited New Zealand funding options despite having done the hard yards of proving they have a decent chance of making it big.

That’s not to say there aren’t investors who operate in the venture capital space in New Zealand. Depending on your definition of what stage constitutes venture, there are a few players such as Punakaiki Fund, GD1 and Movac. These VCs pool capital from diverse groups such as wealthy individuals, iwi and the government, and invest it on their behalf. The funds referred to above total $180m, which they would expect to invest over five to seven years (so roughly $30m a year of investment).

By most industry estimates this amount is not enough to truly move the needle. At the New Zealand Angel Association conference in Blenheim last year it was estimated that the funding gap is over $240m annually.

Punakaiki Fund founder Lance Wiggs, who also advised companies as part of New Zealand Trade and Enterprise’s ‘Better by Capital’ programme for many years, says he’s seen well over a hundred businesses with viable prospects struggle in this environment. They are often forced to economise in the short term at the expense of growth. This means New Zealand loses the opportunity for these companies to truly capitalise on the market opportunity in front of them and possibly become global successes.

Not everyone in the industry agrees. Rowan Simpson, a former early employee and investor in Wellington success stories Trade Me and Xero, is sceptical that there really is a shortage of capital in Aotearoa. Good companies with strong growth prospects that are executing well have always been able to raise the money they need, he argues.

He is right, in that some of the most promising ventures in recent times have been able to raise international money, and this reduces the need for any New Zealand-specific investment in this area. However it doesn’t necessarily mean all companies who could be a success have been appropriately funded. This also holds back companies that might be important in a New Zealand context but do not excite an international audience (this is particularly true for non-tech companies).

The other problem with being funded this way is that American investors eventually want the company to register as a corporation in the US to focus on growth there, reducing local job opportunities and tax revenue and defeating the purpose of building these successful Kiwi companies in the first place. A case in point is the furore over whether Rocket Lab really is a New Zealand company, when it is now registered in the US and the actual return to the government on what was spent in the venture’s early stages is questionable.

There have been a few moves to try and address the situation. The government set up the New Zealand Venture Investment Fund (NZVIF) in 2002, but hasn’t increased its funding significantly since. The Labour government has introduced a tax refund on research and development costs, which is a good start, but in the early stages startups don’t pay much tax (if any) as they generally lose money for their first few years of operation.

There is probably no one magic bullet solution. It will take a combination of all factors, from government incentives and a wider range of funds in the venture space through to bigger cheques being written, and of course time to build a sustainable solution.

Investing in private companies creates immediate jobs and investment back into the local economy. This goes hand-in-hand with the potential financial returns to investors.

When Zeffer Cider raised $2m in 2017 it used the funds to relocate its operation from Auckland to Hawke’s Bay, sub-leasing a local farm, implementing a new manufacturing fit-out, opening a taproom and in the process creating numerous regional jobs. On a larger scale, think about the role Trade Me and Xero played in the growth of Wellington’s CBD, from large-scale property development to significant expansion of the employment market.

The funding gap should be seen as an opportunity. With the right government incentives and allocation of more private funds to the space, investing in Kiwi businesses at this stage in their development will deliver benefits to all New Zealanders.

Bill O’Boyle is Director of Private Capital at Snowball Effect, New Zealand’s leading online investment marketplace. 

Unregulated algorithms are manipulating societies, Primer AI founder Sean Gourley says. (Photo: Supplied.)

Business | March 27, 2019

Q&A: everything you need to know about the cyber armies coming for your democracy


One of the most powerful tools of the 21st century is being allowed to operate with impunity and it’s hurting humankind, according to a Silicon Valley-based New Zealand AI expert.

Artificial intelligence expert Sean Gourley is in the business of creating machines that can read and write.

The Kiwi is the founder and CEO of Primer AI, a Silicon Valley company employing 70 people. Primer automates white collar jobs that would otherwise be done by humans, so Gourley knows a thing or two about where future technologies will take us.

The former national decathlete is also a proud Cantabrian and the events of March 15 in his home city have him reflecting on the dark side of AI.

Back in the country to speak at AI Day, Gourley says unfettered algorithms that know us better than we know ourselves are manipulating humankind, and we ain’t seen nothing yet. Russian interference in the 2016 US elections was not particularly sophisticated compared with what the technology is capable of now.

We have air traffic control regulations to ensure planes don’t crash into each other, and yet we allow online platforms to operate with a near total absence of oversight, he says. AI is being used to propagate extremist beliefs, create divisive factions within nations and disrupt the democratic process.

There is no magic algorithm that can counter it. It will require better collaboration between humans and machines, and he says the whole process must be slowed down – if it looks at all suspicious just delay it.

The following transcript has been condensed and edited.

The issue of extremist material online is obviously top of mind following the Christchurch terrorist attack, as is the responsibility of the tech giants like Facebook to stop the spread of this kind of stuff. How do you see it?

If we point the finger at Facebook we should also point the finger very clearly at the New Zealand spy agencies. This should have been seen and stopped. You have a foreign national coming into the country, buying guns, joining gun clubs, travelling overseas, associating online with extremist groups, making a statement that they were going to go out and attack, and livestreaming the thing. What are you doing?

As you look at that from an outsider’s perspective, they just missed the boat, they should have got it. So we can go and point at Facebook, and we should, but we should also look internally and say ‘we have responsibility for tracking and monitoring people that have a high probability of conducting extremist terrorist attacks’. It’s pretty basic – anyone that’s got a gun permit that’s posting and saying they’re going to attack, have a look at their Facebook the next day. It’s not even AI.

So what’s the responsibility of the big tech companies?

I think we can all agree this kind of content shouldn’t be out there. If you’re building an algorithm that’s going to detect these kinds of behaviours without any other information about who the person is, then you’re going to run into what’s called the ‘precision recall trade-off’.

Any algorithm that makes a decision has got a measure of precision: for everything it flags as being controversial or forbidden, you’ve got to measure what percentage of that is actually correct. On the other side you’ve got recall, which is: of all the things that should have been flagged, what percentage did you capture? So if you think about this from Facebook’s perspective it was a recall error – something that should have been flagged wasn’t.
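The precision-recall trade-off Gourley describes can be sketched in a few lines of Python. This is a purely illustrative example with made-up data – not anything drawn from Facebook’s systems or any real moderation pipeline:

```python
# Illustrative sketch of the precision-recall trade-off.
# The item sets below are invented for demonstration only.

def precision_recall(actual, flagged):
    """Compute precision and recall for a set of flagged items.

    actual:  items that truly should have been flagged
    flagged: items the algorithm actually flagged
    """
    true_positives = len(actual & flagged)
    precision = true_positives / len(flagged) if flagged else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

# Suppose items 1-4 are genuinely forbidden content,
# and the classifier flags items 3, 4 and 5.
actual = {1, 2, 3, 4}
flagged = {3, 4, 5}

p, r = precision_recall(actual, flagged)
print(p, r)  # precision 2/3 (item 5 was a false alarm),
             # recall 1/2 (items 1 and 2 slipped through)
```

Tightening the classifier to flag less (raising precision) tends to let more bad items slip through (lowering recall), and vice versa – which is why, as Gourley notes, a miss like this is a recall error.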

You could conceivably have a million people working on this and not solve this problem, and there isn’t a magical AI algorithm that will solve it, not unless we’re willing to have ourselves censored on a fairly big scale.

I think the way to do this is to slow it down, so when information comes out, instead of everything rocketing up the charts and being incredibly viral, if it looks at all suspicious just delay it. Don’t let it go out to the wider audience until it’s been really validated.

Which is where the economic dynamics of this fall in, (because) Facebook doesn’t want to lose breaking information, the virality. If it put in checks and balances it would slow things down, but it would make it safer. We need to find ways for people and algorithms to work together and the only way to do that is to give people a little more time.

The second fundamental problem is the push of people towards extremism. YouTube is a classic example of this, the algorithm has learned that if you show people ever more extremist content you will keep them engaged. No-one really programmed that. That I think is perhaps the bigger problem – what are we doing to monitor the path towards extremism that the platforms and the algorithms have figured out is a key driver of profitability? We’re being treated like products bought and sold by the algorithms that have learned to manipulate us very, very effectively.

This sounds very scary.

It is scary and you should be scared. But it’s operating without any regulation. Where are we in all of this? We’ve said ‘hey go at it, good luck’. It’d be like letting in airplanes here and saying ‘we trust you, it’s a wonderful product you’ve got there, not our job to regulate you’.

Should Facebook’s livestreaming tool be taken down following the videoing of the Christchurch attack?

But are we going to stop FaceTime? Are we going to stop you doing that on WhatsApp? Are we going to stop anyone livestreaming anything? I think the answer is no, it’s a technology and livestreaming is going to be valuable for many different things. I was watching the livestreaming of the New Zealand athletics champs on Facebook.

It goes back to slowing the system down. We can start to put checks and balances into how it’s being done, and what the requirements are. Maybe the requirement is you must meet a precision-recall target on the algorithms that you’re deploying, and you must have an established path, a 15-minute response time for a human to look at it. And if you can’t do that, well you can’t do it.

Like anything, we’ve got a whole bunch of laws and regulations around planes taking off. Where’s that? Who’s the government adviser on this, who’s Jacinda talking to? It’s exactly what a Chief Technology Officer should be doing. We don’t have one because it became a political football.

You’re a Kiwi, you’ve talked about maybe coming back and setting up a business – what responsibility or motivation do you feel to help sort some of this stuff out?

The second mosque attack happened about a mile and a half from the house where I grew up, around the corner from where I used to train and run as a kid. It’s home, very deeply. It’s one thing to see it in Iraq, in war zones, it’s another to see it in your home. So that certainly made me sit up and look a little more deeply at what we can do out here. I think the challenge for us is still two things – the machine learning and AI talent in this country versus what you’ve got in Silicon Valley, and then the second is the market opportunity New Zealand has and the willingness of the government to front up and spend some money to get this stuff right.

Have you met Mark Zuckerberg? Do you have a view on whether he should have fronted up and apologised?

I’ve met him a few times. I think he’s very calculated in his statements and responses, and I think he’s made a very conscious decision not to engage. I think empathetically he should have, regardless of anything else, and I think if I was running a platform that had this kind of response I would feel a moral duty to be very engaged.

I can see pragmatically why he hasn’t, but I don’t think this is a pragmatic argument. But it doesn’t surprise me knowing how he’s responded previously to a whole bunch of political things. When he has been forced to talk he’s been less than forthcoming with all the information. But I don’t think it’s excusable.

Building machines that can read and write also sounds scary. Are machines going to take our jobs?

I think disruption to work, particularly white collar work, is kind of scary, and it’s something as a society we have to wrestle with. I don’t think we’re quite ready for the disruption that’s going to cause.

On the flip side of that, you’ve got people like financial analysts working 80-hour weeks, and we’ve got customers where they’ve got 200 analysts who wake up at 4.30 in the morning and go through things in preparation for the 6.30am call, and that’s two hours of work, pretty mechanical, fairly repetitive. It’s not why they got into finance. If you’re automating that it’s akin to saying, ‘look, we’re automating the hand tilling of fields’.

So from an economic perspective, if the technology is there you definitely want to deploy it because humans are incredibly expensive to do these roles.

From society’s perspective you’ve now got a lot of people whose jobs are disrupted or removed altogether, so what’s the response there? You’ve also got to look at it from a celebration perspective and say well, what other opportunities do we have? Now, you can celebrate that if you’ve still got some sort of income, money coming in. But if the result of automation is poverty then it’s a huge problem.

So it’s not about AI, it’s about the economic systems we create to support people in a world where cognitive tasks are automatable.

Is New Zealand behind the eight-ball on AI?

You can measure this on what sort of research is coming out of New Zealand universities, and I don’t think that would scare the top 10 countries on that. In terms of investment by government and/or industry, again it’s not going to be close there. You can also look at it from the perspective of what kind of strategic plan New Zealand has got, and I haven’t seen one. You can certainly go and look at what Singapore, Estonia, China, even the US and now Australia have put together. So on these sorts of vectors we’re not sitting in a world-leading position here.

And that’s a worry?

If you’re talking about a technology that has the potential to disrupt a large percentage of white collar work, and factory work as well – if you don’t have a plan as a country, that is a problem. Secondly you’re looking at the security and defence components, this is going to change the geopolitical landscape. And thirdly it’s just going to be a massive driver of economic growth, and countries that don’t have a plan on that are going to be always responding to what else is happening. So yeah, it’s overdue.

On the point about changes in the geopolitical landscape, and issues like the Cambridge Analytica scandal…

The first thing to understand on this is that Cambridge Analytica was a tiny little blip in the landscape. What you’ve got is an incredibly co-ordinated and well-structured attack from the Russians through the Internet Research Agency, which is a paid group of operatives with the job of disrupting democratic countries’ elections. So this is very much at the centre of a new doctrine of warfare, it’s one of the key pillars. What that involves is creating divisive factions, tribalism within countries, and having them fighting against each other as a way of disrupting democratic processes. So it turns out that with the infrastructure for targeting information to individuals and looking to manipulate their perspective of the world, if you put a bit of money behind that you can do it very effectively.

That was first-generation stuff in 2016, we’ve got the 2020 US presidential elections coming up, things have advanced considerably. It’s not just the Russians, you’ve got the Iranians, the Chinese, you’ve got capabilities that are now being tracked and recorded in 48 different countries that have the ability to do these kinds of attacks to a greater or lesser extent.

So we will see a lot more of it, and this will not just be people behind keyboards at the Internet Research Agency, this will be machines that are increasingly generating images, stories and video that are tested and targeted to you specifically. Why is this an issue? Well, democracy requires a sense of truth, it requires a sense of civil debate, and if you’ve got extremist beliefs being propagated and accepted, and you’ve got a disbelief in truth, then democracy struggles.

You can go back to Abraham Lincoln – ‘You can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all the time’. Well he didn’t have the ability to generate lifelike images, he didn’t have millions of bots. If you can fool all the people all the time, which we’re in a place where we can, then democracy is under threat.

What can you see happening in 2020, and is enough being done to combat it?

In 2016 we saw both Brexit and the US presidential election unequivocally disrupted by foreign manipulation campaigns. Call them bots, but they were largely human-driven with some degree of automation, so not particularly sophisticated.

With AI since then we’ve been able to generate images of humans that are indistinguishable from real humans, videos of known figures that can be made to say and do whatever you want, and text that most of us can’t determine is true or false, and the speed at which this is unfolding is incredibly rapid and getting more sophisticated all the time. So that’s your arsenal today.

We haven’t got particularly sophisticated counter-measures. They’re starting to come, we’re working on a bunch of them. But the attack capability has developed much faster than the defence. So in 2020 we would expect to see some next-generation AI being used as part of this attack. I think what this is going to mean is more convincing bots.

We might take that a little further and generate an image of a person who might be more appealing to the demographic. Instead of just building up random hate tweets, they can generate a set of comments over a long period of time that look incredibly convincing. So someone who looks like someone you can trust saying things that you have affinity with, and not just one but thousands of them, and you’re now exposed to that in an attempt to make you more extreme in your beliefs.

All of this exists today, and it’s just a matter of operationalising it. The technology exists to do it, and we must assume it will be done. And it’s not just countries, it’s going to be deployed by white nationalists, Islamic extremists, anyone who wants to create fear and panic, recruit, indoctrinate.

Taking it off Facebook will drive people into more extremist networks. Just solving Facebook doesn’t solve this.

We do need to invest more heavily in defence measures. We need to invest massively in journalism, and we need to educate people a lot. Estonia has been under these attacks for six, seven years and the education has been fairly good, ‘here’s how these attacks work, here’s how manipulation works’.

It’s interesting you say invest in journalism, because that’s an industry that’s been completely disrupted.

This is where the greater culpability lies – not only has Facebook gone in and propagated this information, it has decimated the guardians of truth, the people whose responsibility it was to inform and encourage debate, with scant regard for that sector.

If you compare what the editor of, say, the Miami Herald would have previously done, they would have sat down and made a decision about what goes on the front page. You’ve replaced that with an algorithm trying to make decisions about what information you should show. We have replaced a bunch of highly educated, trained professionals with a set of algorithms whose main job is to sell product, or maybe more accurately to sell you as the product.

We need to think about different ways to fund journalism. I for one would love to see a huge investment by the New Zealand government in journalism. Instead of waiting for Facebook to solve this, let’s put money into the journalistic community and let’s also put it into education.