AI writing tools are free, easy to use and already everywhere. But is it cheating to use them to help write an essay? Shanti Mathias spoke to New Zealand academics about AI’s place in education.
When California company OpenAI released its ChatGPT tool to the public last November, social media promptly filled up with screenshots of users bantering with the state-of-the-art chatbot, with varying degrees of success. The tool can write songs, resumes, explainer articles and essays – not necessarily good writing, but certainly passable. While journalists managed their professional anxiety by getting the chatbot to write articles for us, universities combed frantically through their academic integrity policies, fearing widespread access to ChatGPT might transform learning and teaching.
Over the long summer break, Aotearoa’s academic staff have been keeping a close eye on overseas responses to ChatGPT, says Catherine Moran, deputy vice-chancellor academic at the University of Canterbury. The three universities in South Australia, for instance, have all agreed that use of ChatGPT is allowed if students disclose it.
New Zealand’s eight universities are trying to align on their AI policies, although individual universities will differ. “Cheating is not new,” Moran points out, but easily accessible AI does offer new avenues to present work that isn’t your own.
Moran is quick to point out that the use of AI writing tools is only the latest chapter in a story about how learning happens. In the last 30 years, the internet has dramatically changed how student learning is assessed. Digital tools like Google Scholar and JSTOR alter what knowledge lecturers expect their students to access; using ctrl-F to find a key quote for your essay is quite a different process from rifling through paper journals in the library. Exams can be typed on a computer in a bedroom, not handwritten in an invigilated exam hall; multiple choice tests can be graded by a computer rather than laboriously checked by a tutor. And, if you don’t want to write a lab report, you can pay an online essay writer to do it for you – but if you’re found out, you’ll be taken through disciplinary procedures.
Universities seeking to clarify rules around the use of AI in assessments are reckoning with a normalisation of other digital writing tools, says Collin Bjork, a senior lecturer in communication at Massey University who writes about the intersection of technology, learning, and language. “Silicon Valley companies have a vested interest in creating a fear of students cheating,” he says. Most universities pay for their students to use Turnitin, software that assesses bodies of text for plagiarism. “They take a student’s intellectual property into their database in perpetuity, then sell it back to universities,” he says. “It’s an outrageous business model.” Turnitin is already working on software to detect the use of ChatGPT.
Students are also used to writing with the help of artificial intelligence software, even if the capabilities of ChatGPT are new. A student tired of getting assignments back filled with corrected apostrophes might be tempted to use Grammarly, with its ubiquitous advertisements all over studytube – the University of Auckland even pays for professional subscriptions for all of its staff and students. Even for people who don’t use Grammarly, tools like predictive text, automated email responses and search autofills – which, like ChatGPT, make assumptions about language use and context based on information they’ve been trained on – all normalise the idea of writing with the help of artificial intelligence.
Given this, how radical a shift is using ChatGPT? Some people compare it to a calculator. “The invention of the calculator didn’t stop people learning mathematics,” says Alan Shaker, president of the Auckland University Students’ Association. “In the real world you can use the tools available to you and improve on your work – we need more assessments like that, not high-pressure tests where there’s no room for learning.”
But Moran suggests that while a calculator is handy, tools such as ChatGPT are something different: a way for students to bypass the process of writing, which is important in itself, and not simply a means to an end. “Writing is an important part of learning – it’s how we synthesise and analyse information, express ourselves and process our thinking, whereas something like a calculator or Grammarly doesn’t do that.”
Much of the hype around ChatGPT is about this: the questions it raises about writing itself. But, while there are other AI chatbots out there, it’s also had good marketing: those perfectly screenshottable answers bracketed by the amazement or scepticism of whichever human has generated them. Playing with it is interesting, and fun.
It’s also accessible, with a simple-to-use interface that’s currently free. There are certainly other AI tools, even spookily sentient-seeming ones, but ChatGPT’s widespread release and availability mean it can be used by most people with a computer, not just those with access to private servers.
That might be changing: OpenAI has recently begun testing a paid version of its chatbot. That worries Bjork. “If there’s a premium version with higher quality [output] and a free version with lower quality [output], it just creates another digital divide between those who can afford it and those who can’t.”
Will AI make everyone sound like white men?
Collin Bjork wants to know how the tool will work for the many different groups of people who are students: disabled students, those who work part- or full-time while studying, people who learn remotely. “Writing itself is a tool, and a very disruptive one. In this country it was a colonising tool, to say who was literate and who was not.” It’s not clear what datasets ChatGPT has been trained on, but as well as raising concerns about bias, the tool might also shape writing style. “Whether it’s legal or academic writing or poetry, those disciplines have traditionally been dominated by white men,” Bjork says. “Those standards and forms will be frequent in the dataset, so I think it has a risk of making everyone sound like white men.”
It’s also worth thinking about why students violate academic integrity policies in the first place. “No student comes to university wanting to cheat,” says Shaker. “They cheat because they see no other option; modern universities position themselves as profit-making businesses, and that rubs off on student culture, so students are focused on getting out with a degree rather than learning.” Changing the cultural and economic conditions that encourage a transactional approach to education is complex, but in the meantime, Shaker wants to work with the University of Auckland to make sure that student support services, including academic and financial support, are easier for those in need to access.
Beyond concerns about students using ChatGPT or other AI tools to create work that isn’t their own lie deeper conundrums about what kinds of learning and assessment universities offer at all. If AI can perform the same basic synthesis that university students are assessed on, then what value does the learning they’re being assessed on actually provide?
Again, structures and cultures at universities may have to be challenged. If universities are worried about students using AI to circumvent learning, Bjork suggests encouraging lecturers to develop unique and specific assessments – something more creative than a prompt that can be answered by an AI. That requires resourcing. Bjork notes that many university professionals around Aotearoa are still negotiating for better pay after strikes last year. “If universities want lecturers to be innovative to work with tools like this, give them the time and space and money to do so,” he says.
While developing academic integrity policies that take into account the possible use of AI, Moran, too, says that ChatGPT invites universities to reimagine assessment. “ChatGPT can dig through articles but it doesn’t really change our thinking,” she says. “Humans have real world experience and knowledge and depth – learning that is about growing and changing.”
Make no mistake, it’s everywhere
Regardless of what happens in universities, the internet is already full of AI writing, designed to appeal to search engine algorithms; even those who don’t use AI writing tools directly might find their ways of thinking and learning shaped by their use. CNET might be upfront about its use of such tools, but people who create filler content to boost websites in search engines might not. It could be bad news for copywriters.
Microsoft has a significant investment in OpenAI, so these tools might become embedded in ubiquitous Microsoft products like Word and Teams; meanwhile, Google and Apple are likely to release some of their own AI tools this year to stay competitive. Whether or not universities limit use of AI tools, an inundation of AI writing will affect how information is accessed and the style in which it is written.
Furthermore, ChatGPT is hardly the final form of artificial intelligence that is widely accessible to the general public. In experiments with the tool, Bjork has found that it doesn’t learn from information he gave it in a back-and-forth conversation, and that its training data includes nothing that has happened past 2021. He can imagine using the AI available now to brainstorm ideas, or to generate first drafts. But AI tools will develop and improve, and each new iteration could continue to challenge ideas about what makes human learning unique when the same skills can be simulated by electricity racing through transistors.
The role of AI in universities and beyond is clearly a crucial ongoing debate. I ask Moran what universities can do to preserve learning as AI becomes more widespread and powerful. “You don’t learn in isolation: you learn in communities,” she says. “We have to build in interaction so that learning is more than just reading things.”
For now, though, while ChatGPT may write like a person, it cannot truly learn in the way a human can, building connections from an idiosyncratic and unique combination of knowledge, opinion, and experience. It can only draw conclusions from what seems most likely, based on its analysis of its database – which was written by humans. “It’s easy to anthropomorphise [ChatGPT],” says Bjork. But the AI isn’t a person. “It has no way of accounting for the truth: it just spits out words and doesn’t know what they mean.”