“Just as the calculator got us thinking about what we want to accomplish with numeracy skills, artificial intelligence that can produce text offers a similar opportunity,” write Mauritz Kelchtermans and Massimiliano Simons.
No, this opinion piece was not written by an algorithm. Not yet. But we are heading towards a future in which that question becomes the norm. The hype around new developments such as GPT-3 and, more recently, ChatGPT, both developed by OpenAI, is hard to ignore these days. GPT-3 is an artificial intelligence (AI) algorithm that can generate natural language, such as text or speech. It can produce seemingly miraculous, (mostly) coherent texts in response to simple prompts.
The results of GPT-3 are indeed amazing. The algorithm responds to every question or instruction in the blink of an eye, even in Dutch. Do you want a novel about a cat travelling to Mars? No problem. An essay with three arguments for why the soccer World Cup is ethical after all? No problem. A Shakespearean sonnet on the corona pandemic? Check. An opinion piece on the effects of GPT-3 on our society? Check (we did give it a try).
People everywhere are experimenting with GPT-3. Researchers, for example, have already submitted to a journal an article written entirely by GPT-3, explaining why GPT-3 will affect the writing of academic articles (yes, we are not that original).
GPT-3 also appears to have impressive predictive power. You can already see the work of GPT-3’s predecessors in Google, Gmail or LinkedIn: these apps often suggest search terms, or replies you could give to an email or personal message. This will only become more common in the future.
However, there are also things about this new technology that worry us. First, you never know whether what the algorithm produces is even remotely correct. Like most AI algorithms, GPT-3, and soon GPT-4, is trained on enormous datasets of past data. So if something has been repeated often enough, the algorithm will repeat it too, even if it is wrong.
For example, GPT-3 sometimes reproduces long-debunked urban myths, such as the claim that you must wait half an hour after eating before swimming, or that a person swallows an average of seven spiders while sleeping (neither is true). Or present the algorithm with the following riddle: a bat and a ball cost $1.10 in total; the bat costs $1.00 more than the ball; how much does the ball cost? GPT-3, like most people, will give the wrong answer: 10 cents (the correct answer is 5 cents).
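For readers who want to see why the ball costs 5 cents: the riddle boils down to two simple constraints, and the intuitive answer of 10 cents breaks the second one (the bat would then cost only 90 cents more). A minimal check, our own illustration, working in cents to avoid decimals:

```python
# Bat-and-ball riddle: bat + ball = 110 cents, bat = ball + 100 cents.
# Substituting the second constraint into the first: 2 * ball + 100 = 110.
ball = (110 - 100) // 2   # 5 cents, not the intuitive 10
bat = ball + 100          # 105 cents

assert bat + ball == 110 and bat - ball == 100
print(ball)  # 5
```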
Moreover, any large, uncorrected language model reproduces racial and gender biases, since these too are embedded in its datasets. Ask GPT-3 to describe a CEO and it will describe a man; ask for a housekeeper and it will describe a woman. In this sense, GPT-3 does not aim at the correct answer, but at how things used to be. Whether that past is correct or ethical is fundamentally irrelevant to it.
So the predictive powers of GPT-3 in fact amount to extrapolating the past into the future. At most, the old gets a new look. It is a bit like today’s Hollywood: a machine that runs partly on nostalgia, repetition and remakes (which, oddly enough, often flop).
Why is writing so easy to automate? The answer is confronting: much writing is itself mechanical and predictable. Lewis Mumford said it more than 100 years ago: to replace the worker with a machine, you must first mechanize the work. GPT-3 can easily write newspaper articles or student essays because these texts already follow predictable rules.
Some may see this as good news. Hooray, algorithms can take over the boring tasks, leaving only the moments of inspiration and innovation to humans. But creativity and innovation often require repetitive groundwork first. Consider the Roman schema of translating, imitating and surpassing (translatio, imitatio, aemulatio). The idea was that a good artist, and by extension any other seeker, can only achieve mastery by first repeating and imitating, and only then creating. Mechanical, repetitive work is a precondition for true creativity.
A lack of a good foundation is an obstacle not only to outstanding performance, but also to its recognition. Blind trust in GPT-3 can lead to a loss of knowledge and expertise. When only AI writes articles or software code, who can still check whether the content is correct?
The above shows that human choices are hidden behind GPT-3. GPT-3 can only produce something from a question posed by a human, and that question often already contains the creative combination (for example: “Write a summary of the Belgian coalition agreement in the style of Lize Spit”). Asking good questions requires familiarity with the tool and knowledge of the desired end product.
This was well illustrated by an opinion piece in The Guardian in 2020, written by GPT-3. In reality, that piece was quite misleading. The editors admitted that the theme was imposed on GPT-3, and that they stitched together the best passages from eight different outputs, because GPT-3’s answers to the same prompt are generally poorly reproducible. The next time you come across an impressive example of GPT-3, ask yourself: how many failed, boring or nonsensical attempts preceded it before a human picked out the most interesting one?
In this sense, GPT-3 will not replace writers and programmers. After all, it is not in a position to make independent choices about the purpose of a text, or to judge whether the final product achieves that goal. It is the student who must choose to have an essay written by GPT-3, and it is the teacher who ultimately judges whether GPT-3 has succeeded.
So the real problem is not that people are being left out. Humans continue to make the choices. What can become a problem is who gets to make the original choices. Writing your own opinion piece is easy. But reshaping the implicit choices on which a system like GPT-3 is built is trickier. That requires vast amounts of training data and computing time, amounting to a total cost of around 10 to 20 million dollars, plus a few cents for every interaction with, say, ChatGPT. Quickly designing a non-discriminatory alternative to GPT-3 is not within everyone’s reach. So here, too, we are gradually surrendering ourselves to Big Tech.
The optimist will reply to the problems above: GPT-3 can help you precisely when you are not creative enough or do not know what question to ask. Use GPT-3 as a dialogue partner to solve that problem. It is a valid objection, one that points to a realistic future scenario for GPT-3/4: it will not replace the writer, but will work alongside them.
We saw something analogous with the rise of chess computers and the impressive performance of AlphaGo. We did not suddenly stop playing chess or Go en masse. On the contrary, there are now more players than ever. As long as people can derive pleasure, money or reputation from writing, it is unlikely to disappear either.
Moreover, the fascination often lies in the combination. Just as AlphaGo can make surprising moves in Go, GPT-3/4 can enrich our own writing. We know, for example, the story of a senior executive who, during a brainstorming session, traded his inexperienced colleagues for a more fruitful conversation with GPT-3.
Yet teachers are worried about GPT-3: what does homework still mean if the algorithm can do all the work? Plagiarism could well become the rule. Still, GPT-3 need not mean the end of homework. We could, for example, require GPT-3 and its successors to hide an identifier in their texts. This identifier would not need to be visible to humans, only to anti-plagiarism software.
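How such a hidden identifier might work is an open question; serious watermarking proposals operate on the model’s word statistics rather than on visible text. Purely to illustrate the idea that a marker can be invisible to readers yet detectable by software, here is a toy sketch of our own (not an existing scheme) using zero-width characters:

```python
# Toy illustration only: hide a tag in zero-width characters that human
# readers cannot see but a plagiarism checker could extract.
# A determined user could simply strip these characters, so this is no
# real defence; it only shows the "invisible identifier" idea.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed(text: str, tag: str) -> str:
    """Append the tag's bits to the text as zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    return text + "".join(ZW0 if b == "0" else ZW1 for b in bits)

def extract(text: str) -> str:
    """Recover a hidden tag; returns '' if none is present."""
    bits = "".join("0" if c == ZW0 else "1" for c in text if c in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

marked = embed("An essay on the French Revolution.", "GPT-essay")
print(marked == "An essay on the French Revolution.")  # False: the tag is there
print(extract(marked))  # GPT-essay
```

To the eye, the marked essay is indistinguishable from the original; only software that looks for the zero-width characters sees the difference.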
But perhaps the most obvious solution is that GPT-3 should make us think about what kind of writing we really want from our children and students. Perhaps repetitive exercises should give way to alternative, creative assignments, or to in-class assignments without computers.
The history of the calculator carries the same message. It, too, once caused an uproar in the classroom, with accompanying panic about the end of mathematics. In reality, mathematics did not disappear, but changed. Just as the calculator got us thinking about what we want to accomplish with math skills, GPT-3 provides a similar opportunity. Not to automate the world, but to rethink what writing can mean in our society and how machines can strengthen us in this, without replacing us.
Massimiliano Simons is professor of philosophy at Maastricht University and co-founder of the Working Group on the Philosophy of Technology (WGPT) at KU Leuven.
Mauritz Kelchtermans is preparing a doctorate on the ethics of AI at KU Leuven and is co-founder of the Working Group on the Philosophy of Technology (WGPT) at KU Leuven.