Photo: Getty Images; additional design by Tina Tiller

Opinion | Society | May 25, 2022

As Labour again fails to address the housing crisis, our most vulnerable suffer


Budget 2022: The three housing decisions in this year’s budget illustrate the deep malaise of this government’s housing policy, argues Alan Johnson.

There were three notable housing features in budget 2022. KiwiBuild appears to have been quietly laid to rest, yet more money is to be spent on transitional housing, and the provision of social housing may grow by 3-4%. Together these signal the extent of the government’s housing ambitions for the remainder of this parliamentary term and possibly for the rest of its tenure. If the government changes in 2023, this is probably as good as it gets.

It appears the KiwiBuild programme will end on June 30, 2022, with its remaining budgets allocated to the land-for-housing programme*. Budget documents vaguely describe this programme as the “expenditure incurred in the facilitation, acquisition, and development of land and residential properties”, which can of course mean anything to do with housing.

Budget 2022 proposes that the pool of income-related rent-subsidised tenancies be expanded during 2022/23 from 72,500 to 75,500. Such an expansion should be welcomed, although it needs to be seen in the context of a public housing waiting list that had reached almost 26,000 households at the end of 2021 and that continues to grow.

The response to this growing waiting list has been more and more transitional housing. At its worst this is make-do accommodation in run-down motel units, although purpose-built housing has also been developed. The number of transitional housing places almost doubled between December 2018 and December 2021 to 5,100, while their cost has grown from $148 million in 2018/19 to an expected $466 million in 2022/23.

These three decisions illustrate well the deep malaise of housing policy and the government’s inability to make a difference to New Zealand’s chronic shortage of affordable housing. 

The demise of KiwiBuild, for example, illustrates a lack of realism both by Labour politicians and the officials who advise them. The vision of building 100,000 affordable houses in 10 years was laudable but lacked both an understanding of how difficult this would be and a sense of what needed to change to make it happen.

The extent to which the government is bereft of any wider vision for housing is well illustrated by its response to the chronic shortage of social housing in Rotorua. Almost nowhere else in New Zealand is the undersupply of public rental housing as extreme as in Rotorua. At the end of 2021 there were 973 households on the local public housing waiting list, while the stock of already occupied public rental housing was just 788 dwellings. In response to this shortage, budget 2022 allocated $130 million over the next four years to provide transitional housing support to 200 families in Rotorua. It apparently has never occurred to the government that it could just go out and build these houses.

Such short-sightedness borders on ideological blindness: an unwillingness by Labour’s leadership to truly appreciate the extent of the role the state must play in addressing the housing crisis.

In April 2021, housing minister Megan Woods acknowledged that the “days of thinking the state can be a passive bystander and the market will provide, I think, are over – I think we’ve seen that there is market failure, the market hasn’t delivered”. Yet a year later, as she announced a $1.4 billion spend to fund infrastructure rebuilds in Auckland, she was happy with the idea that most of the Crown land affected was to be sold off to developers for market housing. Of the 16,000 new dwellings planned for this programme, just 6,000 will be public rental housing owned by Kāinga Ora while 4,000 existing Kāinga Ora houses are being demolished in the process.

But it gets worse. Budget 2022 provided $188 million to cover the costs of selling state-owned land and dwellings, and a further $65 million in costs from the deferred settlements of these sales.

Since the 1950s state housing has been seen in residual terms, as a reluctant acknowledgement by successive governments that many citizens will not be able to be housed adequately by the private market. There has been no bigger vision for the state’s involvement in housing than to provide housing for some of the poorest and most vulnerable New Zealanders. There is no vision of what the state can do to reshape housing markets as a means of addressing inequality and poverty. There has been no ambition that governments can move beyond a mindset where almost everything is financed, planned and built by the private sector.

To give it its due, the Labour government has made the first serious commitment in almost 50 years to addressing the neglect of public housing. If it gets a chance beyond the 2023 election to do so, it needs to continue with a much bolder vision, far bigger budgets and a confidence that a society where everyone has access to decent, affordable housing is actually achievable.

Alan Johnson is convenor of the Child Poverty Action Group and its housing spokesperson.


*In a written response to this piece, a spokesperson for housing minister Megan Woods said the KiwiBuild buy-off-the-plans programme was still running, but how it’s accounted for in appropriations has changed:

“The KiwiBuild Housing appropriation was established in October 2018 for a four-and-three-quarter year period ended in June 2022. This appropriation has been utilised to fund both buy-off-the-plans activity (KiwiBuild) and land-for-housing programme purchases. As part of Budget 2022, to provide more transparency to the individual activities, separate appropriations have been established for both the buy-off-the-plans and the land-for-housing programmes.

“As part of the finalisation of the appropriation at 30 June 2022, ministers have agreed that unutilised funds will be transferred into the buy-off-the-plans programme and the land-for-housing appropriation as in-principle expense transfers (which are a business-as-usual process governed by Treasury rules and processes).

“Like all in-principle expense transfers, these will be reflected in the supplementary estimates for 30 June 2023.”

Image: Archi Banal

Tech | May 23, 2022

AI writing has entered a new dimension, and it’s going to change education


What happens when robots not only learn to write well, but the tech becomes easily accessible and cheap? As Hal Crawford explains, it’ll likely be teachers who feel the effects first.

There are two schools of thought when it comes to artificial intelligence: there are the people who have heard of the GPT-3 language model, and then there are those who have heard about it, gone to the OpenAI site, created a guest login and tried it out for themselves.

The first group contains people who are wondering what the big deal is.

The second group does not. I haven’t heard of anyone who’s actually used GPT-3 and doesn’t think AI is going to change the world profoundly.

Education in particular is going to feel its influence immediately. That’s because although GPT-3 has been around for a couple of years, the ability for anyone to create a guest login and begin using the machine is recent. This is not some techno future. If I had an essay on a tough topic, or a particularly dense piece of writing I couldn’t easily understand, I would now take it to GPT-3.

In fact, I already did. It’s always hard to come up with intros, so I asked GPT-3 to compose this feature for me. I typed in my draft headline – AI language models have entered a new dimension, and it’s going to change education (and almost everything else) forever – and it burped up 474 perfectly readable words in just over a second:

AI language models have entered a new dimension, and it’s going to change education (and almost everything else) forever. If you haven’t heard of GPT-3 yet, it’s a new AI language model developed by OpenAI. It’s the world’s largest model of its kind and can accurately predict the next word in a sentence. GPT-3 is not just a better version of GPT-2; it’s an entirely new way of doing AI. (…)
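That came from OpenAI’s playground, but the playground is just a thin front end over an API, and the same request takes only a few lines of code. Here is a minimal sketch, assuming the 2022-era openai Python package and your own API key; the model name and settings are illustrative, not a record of what I actually ran:

```python
# A sketch of the same experiment through OpenAI's 2022-era completions
# API rather than the playground. The model name, key and parameters
# here are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # use your own key here

response = openai.Completion.create(
    engine="text-davinci-002",  # one of the GPT-3 models available in 2022
    prompt=("AI language models have entered a new dimension, and it's "
            "going to change education (and almost everything else) forever"),
    max_tokens=700,   # room for a few hundred words of continuation
    temperature=0.7,  # a little randomness in word choice
)

print(response["choices"][0]["text"])
```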

What is this thing?

This is me writing again. The human. As the AI says, GPT-3 (the name stands for Generative Pre-trained Transformer) was built by Silicon Valley’s OpenAI in 2020. The machine is a vast network of nodes – a structure that mimics aspects of the brain – that can be trained by playing a game with itself, billions of times over. The game is to guess the missing word in a sentence. One part of the AI guesses, another looks at the answer and adjusts the network’s pathways depending on whether the answer was right or wrong. Where do the sentences come from? Books, articles, essays, stories, conversations, social media posts. Billions and billions of lovely words, waiting to be fed into the machine. Something that couldn’t have happened without the internet.
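To make that guessing game concrete, here is a toy sketch of the idea – my own illustration, assuming the PyTorch library, and nothing like GPT-3’s scale. A miniature network learns to predict the next word of a single invented sentence, adjusting its pathways after every guess:

```python
# A toy version of the training game described above: a tiny network
# guesses the next word in a sentence, and its weights ("pathways") are
# adjusted after every guess. This is an illustration, not OpenAI's code;
# the sentence and vocabulary are invented.
import torch
import torch.nn as nn

vocab = ["the", "cat", "sat", "on", "mat"]
stoi = {w: i for i, w in enumerate(vocab)}

# One-sentence "corpus", turned into (word -> next word) training pairs.
sentence = ["the", "cat", "sat", "on", "the", "mat"]
ids = torch.tensor([stoi[w] for w in sentence])

class TinyLM(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        # For each input word, score every vocabulary word as the next one.
        return self.head(self.embed(x))

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    inputs, targets = ids[:-1], ids[1:]  # guess word i+1 from word i
    logits = model(inputs)               # one part of the AI guesses...
    loss = loss_fn(logits, targets)      # ...another scores the guesses
    opt.zero_grad()
    loss.backward()                      # adjust pathways by the error
    opt.step()
```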

Educational technology researcher Mike Sharples, of the UK’s Open University, says transformers like GPT-3 are set to disrupt education. To make the point, Professor Sharples asked the AI to produce an essay on “learning styles”. It came back with 401 words that had all the look and feel of a competent undergraduate with the confidence to make this assertion:

Ultimately, we need to understand the interactions among learning styles and environmental and personal factors, and how these shape how we learn and the kinds of learning we experience. 

The essay was too short to be a finished product, and something else came to light when Sharples checked the footnoted reference the AI had included. It was fake. Almost real, but not quite.

“When I first tested GPT-3 it was to generate short stories,” Sharples told me. “I was surprised by how coherent and plausible they are. Then I tried generating student essays and academic papers and was shocked that the program made up fake references, invented academic studies, and fabricated ‘facts’. That’s just the way they work.”

Teaching process, not output

Sharples says the rise of transformers means teachers will have to change the way they teach.

“As educators, if we are setting students assignments that can be answered by AI, are we really helping students learn?”

He continued: “I think the immediate reaction from teachers, and policy makers, will be, ‘we can’t change anything’. Then to look for countermeasures, such as tools to check for AI-generated essays. These are unlikely to be successful (apart from obvious ones such as checking citations and references).”

New Zealand education technology expert Stephen Marshall, from Victoria University of Wellington, agrees that AIs will require a big mental shift for many teachers. Professor Marshall, like Sharples, is basically an optimist when it comes to the impact AI is going to have on education. But this is optimism with a big caveat: it’s going to require an awful lot of work from teachers and institutions. Teaching that looks at a completed product only – an essay for example – is finished.

“[Contract cheating, whether by human or AI] is seen as a real problem because of a model of assessment that is based on looking at a product of work rather than paying attention to the processes that lead to the generation of that product. And this is where the pathway for responding to the use of artificial intelligence is going to help us pedagogically.”

Stephen Marshall (left) and Mike Sharples (Photos supplied)

“I think artificial intelligence … has the capacity to really change a fundamental dynamic. Education historically is predicated on the fact that you can’t actually change the quality of somebody’s brain,” Marshall says. “The nice thing about artificial intelligence is its capacity to augment people’s cognition in ways that are productive and helpful.”

Marshall points to the helpful ability of AI transformers to summarise difficult texts as an example. My experience of this backs him up. I was reading the “handbook” of the ancient Greek philosopher Epictetus, and encountered a particularly dense paragraph. Where normally I would just nut it out, sentence by sentence, instead I dropped it into GPT-3. I’ve reprinted the result further below. Its clarity blew my mind.

Both professors have great insights, and I can’t do them justice in this piece. Such are the limitations of my particular natural intelligence (NI) system. I have included the full text of my email conversation with Sharples below, and both he and Marshall will appear on my podcast in coming weeks.

Why didn’t you use the GPT-3 version of this article?

Before I went with “two schools of thought” as my introduction, I had actually considered the “if you haven’t heard” line that GPT-3 wrote for me. I rejected it, probably for the same reason the AI used it: too conventional.

There is a more serious problem with the intro. Read it again.

“GPT-3 is not just a better version of GPT-2; it’s an entirely new way of doing AI.”

That’s a pretty standard kind of sentence structure, and it has a “truthy” feel about it. It’s plausible. The problem is that it’s wrong. GPT-3 is not an entirely new way of doing AI; it’s an iteration of the approach used in GPT-2, with a massively bigger dataset and better treatment of niche subjects.

As Sharples found, the AI doesn’t know or care about truth, even about itself. It is naturally amoral and writes by feel, like a bad marketing department.

It’s a pretty serious flaw, but it won’t stop the march of the GPTs. When the AI is writing copy better than many journalists file, you know you’re living in the future.

Detail from an engraving for Edward Ivie’s Latin translation of Epictetus’ Enchiridon, printed in Oxford in 1715. (image: Wikimedia Commons)

GPT-3 summarises Epictetus

Input: 

When anyone shows himself overly confident in ability to understand and interpret the works of Chrysippus, say to yourself, “Unless Chrysippus had written obscurely, this person would have had no subject for his vanity. But what do I desire? To understand nature and follow her. I ask, then, who interprets her, and, finding Chrysippus does, I have recourse to him. I don’t understand his writings. I seek, therefore, one to interpret them.” So far there is nothing to value myself upon. And when I find an interpreter, what remains is to make use of his instructions. This alone is the valuable thing. But, if I admire nothing but merely the interpretation, what do I become more than a grammarian instead of a philosopher? Except, indeed, that instead of Homer I interpret Chrysippus. When anyone, therefore, desires me to read Chrysippus to him, I rather blush when I cannot show my actions agreeable and consonant to his discourse.

Output:

If someone is being really confident about understanding something that you don’t understand, don’t worry about it. Just remember that you’re trying to understand nature, and if someone else can help you do that, then that’s a good thing. Don’t get too caught up in worrying about what you don’t understand.
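How is a summary like that produced? The whole trick is to append an instruction to the passage and let the model complete it. A minimal sketch under the same assumptions as before – the 2022-era openai package, an illustrative model name – with a low temperature to keep the output close to the text:

```python
# Illustrative summarisation helper for GPT-3's 2022-era completions API.
# The instruction appended to the passage does the work; a low temperature
# keeps the summary close to the source text.
import openai

openai.api_key = "YOUR_API_KEY"  # use your own key here

def summarise(passage: str) -> str:
    response = openai.Completion.create(
        engine="text-davinci-002",  # assumed model name
        prompt=passage + "\n\nExplain the passage above in plain English:",
        max_tokens=120,
        temperature=0.2,
    )
    return response["choices"][0]["text"].strip()

print(summarise("When anyone shows himself overly confident in ability "
                "to understand and interpret the works of Chrysippus..."))
```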

Mike Sharples on AI and language: an email conversation

Hal Crawford: Mike I am very interested in your upcoming book Story Machines. My feeling is that machines being able to invent compelling stories will be a much bigger deal than is widely contemplated at the moment, because humans are very susceptible to stories (regardless of the mind that made them). Is that how you see it?

Mike Sharples: Yes, in Story Machines we discuss how humans are wired for stories. As humans, we construct our perception of the world through narrative – at the neural level by assembling sense impressions into a continuous flow of experience, at the cognitive level by understanding the world through narrative, and at the social level by seeking out stories that fit our view of the world. We are already attuned to machines that tell plausible stories. For example, few readers of Bloomberg News will know that about a third of its content is generated by computer.

Are we close to a Transformer or other AI being able to piece together a formulaic novel?

The first novel written in collaboration with an AI was published in 1993. Scott French programmed an AI “expert system” to imitate the style of Jacqueline Susann’s novel The Valley of the Dolls. That project took him eight years. An AI Transformer program such as GPT-3 can now mimic any style of writing in seconds. These programs are already being used by some would-be novelists as aids to story writing. But they have two fundamental limitations – they have a low “attention span” of around 500 words, and they have no ability to reflect on what they have written, to judge whether it makes sense and whether it forms a tellable story.

Future transformer programs will have larger attention spans. But to write a formulaic novel, they may need to be combined with symbolic AI systems that can generate plots and revise their products. Story generators such as MEXICA developed by my colleague Rafael Pérez y Pérez show how that could be done.

You are someone with a history in education (we all are! But seriously, I believe you have thought and know a lot about teaching). A kneejerk reaction to GPT-3 and its kin would be to panic. Students have the ability now to generate readable essays at will through GPT-3. Instead of freaking out, how can teachers deal with it?

I think the immediate reaction from teachers, and policy makers, will be, “we can’t change anything”. Then to look for countermeasures, such as tools to check for AI-generated essays. These are unlikely to be successful (apart from obvious ones such as checking citations and references), since essays generated by GPT-3 are original and grammatical, and the surface text looks plausible. To detect fake essays will require software that can dig down into the structure, meaning and coherence of the text. Any new AI that is able to detect these deeper elements will also be able to generate them – leading to an AI “arms race” between generators and detectors.

Then, the reaction may be to give invigilated exams. But a few institutions may be prompted to rethink the purpose of assessment and consider how students can learn alongside AI. For example, teachers could generate a set of AI “essays” on a topic, then set students (individually or in groups) to critique them and write their own better version.

As educators, if we are setting students assignments that can be answered by AI, are we really helping students learn?  There are many better ways to assess for learning, such as constructive feedback, peer assessment, teachback.

What does composing an essay do for the mind? If that becomes easier, what does a student lose?

For centuries, students have been set academic essays as a means for them to construct a coherent argument, then compose it clearly and accurately in fluent prose.

Current tools, such as spell and style checkers, help students write more clearly and accurately, but at the expense of fluency – calling up the tools interrupts the flow. AI generators such as GPT-3 can help students become more fluent, giving them different ways to express ideas and to continue the flow. But they are no help in making the essay more accurate or logical. If AI generators make composing easier and let students focus more on structure and argument, that may be to the good. But the danger is that students just let the AI take over writing, churning out plausible nonsense.

To me an interesting analogy is the ability to photograph something. Even though we can do that, many people still draw. Is that what writing will become?

To push that analogy, some people take photos for pleasure, some people enjoy drawing and painting. But to be a good architect or graphic designer means mastering the art and craft of creating images. That means understanding scale, structure, perspective, rendering, colour mixing and much more. Computer tools can greatly assist these professions, but they don’t replace them (yet?). Similarly for writing, some people will write for pleasure, some will have fun with AI generators to increase fluency and take their writing in new directions, some will interact with AI-generated stories through computer games and interactive movies. But to be a good journalist, academic, policy maker or business leader means being able to craft clear and accurate prose to inform or persuade. That’s beyond current AI. Future AI tools may do as well or better than humans at informing, arguing and persuading. That’s when writing will fundamentally change.

In your LSE blog post you note that GPT-3, while making a passable essay, has actually made up a reference and doesn’t really get to the core of the issue or say anything really interesting. Has that been your experience with GPT-3?

Yes. When I first tested GPT-3 it was to generate short stories. I was surprised by how coherent and plausible they are. Then I tried generating student essays and academic papers and was shocked that the program made up fake references, invented academic studies, and fabricated “facts”. That’s just the way they work.

An AI transformer text generator is a hugely souped-up sentence completer. It doesn’t know or care whether what it writes is honest, decent, accurate. Companies such as OpenAI are adding filters to check for bad language, but that doesn’t solve the fundamental issue – that current neural network generators can’t reflect and don’t care. In the future, hybrid symbolic and neural network systems will be developed that can generate well-structured arguments, check their output for accuracy and include relevant examples and references. That will require a step-change in AI development.

To me, GPT-3’s output smacks of Jorge Luis Borges and his made-up references and inventions. What do you think?

Yes, Borges imagined a Library of Babel containing every possible book, including ones of fake facts and invented events. Of course, such a library would be totally useless to the reader. Whether GPT-3 will be useless or valuable depends not on what it produces but how it is employed by human writers. As Borges wrote: “A book is more than a verbal structure or series of verbal structures; it is the dialogue it establishes with its reader and the intonation it imposes upon his voice and the changing and durable images it leaves in his memory. A book is not an isolated being: it is a relationship, an axis of innumerable relationships.” Such is also true for AI-generated texts.

Could I put to you the following thought: that educators will be sorely challenged by Transformers because we have so often focused on the form rather than the substance of essays and stories. That students have learned by mimicry, just as GPT-3 has learned by mimicry. Is that fair?

Yes, I agree. AI Transformers that churn out blogs, student essays and short stories for entertainment may simply confirm that we are obsessed with form over content. But, they may provoke us to think about what AI can’t (yet) do, such as express deep ideas, human emotions and transcendent experience.  We may become inured to plausible nonsense. Or we may explore new realms of fiction that combine human and computer experience.

Mike Sharples has a book coming out soon called Story Machines: How Computers Have Become Creative Writers, published by Routledge on 5 July. 

Stephen Marshall published Shaping The University of the Future in 2018. I recommend you read it if you are interested in the future of education.