Artificial Intelligence, Writing, and Editing (Part 6): Mediocre Computing and Excellent Writing

The discourse around large language models (LLMs) continues to pick up steam, and shows no signs of slowing down. Just this week, we have the launch of GPT-4, which will mean we all have to update our sense of what LLMs can do yet again!

There’s a week-by-week overview of AI developments here. These takes on LLMs and literary fiction are interesting and probably right. This book will help you worry more precisely about the near future of AI technology. There is too much to read, but we aren’t slowing down!

This week, we’re being optimistic. We think that once things settle, LLMs will push writers towards excellence because there will be no other natural home for human writing. In other words, LLMs will supplant mediocre human writing of all genres, and this will probably be good. Mediocre writers will have little reason to exist, so human writers should aim for excellence—perhaps with the help of editors.

Let’s dive in!

LLMs and Mediocre Computing

The advent of LLMs inaugurated what Venkatesh Rao calls mediocre computing. Whereas previous impressive AI breakthroughs like Deep Blue or AlphaGo were examples of excellent computing, LLMs are impressive because they are mediocre.

Mediocre computing is computing that aims for parity with mediocre human performance in realish domains where notions of excellence are ill-posed.

Excellent computing is computing that aims to surpass the best-performing humans in stylized, closed-world domains where notions of excellence are well-posed.

AlphaGo and Deep Blue reached excellent performance in stylized domains. LLMs have reached mediocre performance in realish domains.

Realish domains are enmeshed with the real world’s messiness and complexity. What makes them realish rather than real is that some of this complexity is codified and simplified. The urban road system is realish whereas off-road driving is a more real domain. We live most of our lives in realish domains, so an AI challenge to our performance here hits harder.

Skills learned in one realish domain transfer and leak into other realish domains. Competent language use by LLMs requires coping with the many realish domains that our language tries to model. Language is an open-ended system that can be adapted to model any realish or real domain, and so the impact of LLMs is both impressive and open-ended. Which is why there’s so much discourse about them.

LLMs Tend to Mediocrity

The LLM training process has been covered in many places. Put briefly, LLMs see huge amounts of example data and, using vast computational power, extract patterns in the data at all sorts of scales. LLMs end up having models of the many ways language is used. With these models, the LLM determines the likely continuation of any given string of words (really “tokens”) in a way that’s responsive to the meaning of the words.
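The core idea of scoring likely continuations can be sketched with a toy bigram model: count which token follows which, then turn the counts into probabilities. Real LLMs learn vastly richer patterns over long contexts, but this is a minimal, illustrative version of the same mechanism.

```python
from collections import Counter, defaultdict

# Toy "training data"; real models see billions of tokens.
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each token, which tokens follow it in the corpus.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def continuation_probs(token):
    """Turn follow-counts into a probability distribution
    over the next token -- the 'likely continuation'."""
    counts = follow_counts[token]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

print(continuation_probs("the"))  # {'cat': 0.666..., 'mat': 0.333...}
```

Because “cat” follows “the” more often than “mat” does in this corpus, the model rates “cat” the more likely continuation, which is exactly the sense in which an LLM defaults to the most common patterns in its training data.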

The LLM spiderweb continues to expand!

LLMs are trained on all the language that’s fit to print, post, or otherwise appear on the internet. This means that most of what the LLM sees is language use in its complete mediocrity. For every hard-hitting, well-crafted poem, LLMs see thousands of sloppily written poems. For every piece of excellent prose, they see millions of chat logs, forum posts, pointless arguments, and self-indulgent purple prose. It’s statistically certain that the LLM defaults to mimicking these patterns of language use.

Prompt Refinement

To be sure, people are getting better at tweaking prompts to be specific, to give the LLM context, to help it stay in a particular style, and many more desirable qualities. For example, the LLM “knows” about Milton’s Paradise Lost, so you can get it to write in that style. Note that getting good at prompt engineering is a human skill. A beautiful piece of LLM-generated prose will, we think, take as much human skill to extract from the LLM as a comparable piece of human-written prose. We leave it to writers to figure out how and when LLM use will benefit them. Notice that this mode of LLM use doesn’t trigger the usual fears that the machines will supplant us. Instead, our labour mixes with their power. Using LLMs will be akin to programming, except programming that people trained in the humanities will have some advantages in.

Prompt refinement is good for both mediocre and excellent writing. For most mediocre uses of writing, there’s practically no downside to getting an LLM to do it. Nobody will miss spending extra time typing out mediocre language. For excellent uses of language, the intrinsic worth of the project will guide us; LLMs can enter into the workflow as assistants in many ways, some of which we’ve explored here.

So, we think writers should be cautiously optimistic. For writers aiming at excellence, we think LLMs will help with structure. Writers will become more editor-like, and we editors will continue being editors.