Despite the prevalence of hype, large language models (LLMs) are genuinely impressive. They perform as well as humans for some genres of writing. Given enough context and well-conceived prompting, they can measure up for more complex writing tasks. For spitting out grammatically impeccable text quickly, they’ve already left us humans in the dust.
We are considering developing a workshop on writing in the age of LLMs. Our goal would be to help with: (1) worrying about LLMs more efficiently, and (2) making good use of what the technology affords. It’s a big project. If you’d like to hear about it once it’s done, drop us a line. We’re also open to collaboration, since LLM technology affects all of us who make or improve words for a living.
In the meantime, here are some early thoughts in this direction.
How Not to Give Your Writing Away
We think the deep danger in using LLMs is succumbing to the temptation to give our writing away to the LLM.
This idea is drawn from academic writing, where authors sometimes make arguments that lean too heavily on the existing literature. These papers tend to read as a bit all over the place, following the breadcrumb trails of someone else’s thinking. The paper might be well-sourced and cite a great deal of literature, but it somehow doesn’t add up to an argument. It feels more like a report on others’ work than a contribution of its own.
We all do this from time to time, especially in early drafts. The tendency comes from an overall lack of confidence, unclear goals, or being overwhelmed. When we are not confident, we cling to anything that seems to have that confidence, which is usually the sources we’re reading to get up to speed. Similarly, when overwhelmed, we tend to fixate on details (like citations or things we’ve read) as a way of finding some order in the chaos.
If we as writers (1) are feeling confident, (2) have a clear plan for what we want to say, and (3) are not overwhelmed, we’ll probably not give our writing away to the LLM. However, if any of (1)–(3) are missing, we need to check ourselves; otherwise we’re likely to mistake an LLM’s natural writing confidence for something more than it is. We are in no position to lead in the dance if we’re compromised.
LLM Use Will Depend on Delegation Skills
Even if we’re in a good headspace, LLMs might lead us astray if we don’t know how to delegate work.
If we think of LLMs as writing assistants, we can import all the wisdom about using assistants properly. Good assistant use is good delegation. We need some sense of the assistant’s capabilities. We need a clear vision of the overall goal, which the assistant might not need to have. Given that clear vision, we can break the goal down into smaller tasks, some of which we can delegate to the assistant. To the extent that we’re good delegators, we’re likely to find LLMs useful supporting characters.
Bad delegation introduces errors and confusion where there were none. And what do we expect when we give a big task to an assistant who’s not ready for it? We need to remember that LLMs, despite their polished presentation, are not all-knowing and all-competent. Just like some humans.
The Writer as AI Manager
Depending on the writing task, one might use several different AI assistants. We think that in the near-to-medium term, writing will become a little more managerial under the influence of LLMs. We shudder at this possibility, but it seems to be the trend for the next 5–20 years.
To prepare for this, we all need to attend more deeply to the high-level structural aspects of writing. The writer of the future might manage a team of AI copyeditors, proofreaders, layout editors, and even developmental editors as part of their toolkit. In this context, the writer’s deepest responsibility will be, as always, clarity of overall vision. In a way, not much will change about why we write, but the methods of writing 20 years from now might look very different.