Artificial Intelligence, Writing, and Editing: A “Conversation” with ChatGPT

There has been immense interest in ChatGPT over the past few weeks. Here is a “conversation” with ChatGPT about some of the concerns we’ve raised previously. The exchange has been edited for coherence and clarity. We have not rewritten any of ChatGPT’s output, although we did prod it to be more concise and gave it explicit word counts for its answers. So, while the exchange reads like a conversational interview, keep in mind that we gave the AI some scaffolding.

Clarity Doctors: What do you think is the biggest benefit of AI-assisted writing for academics?

ChatGPT: AI-assisted writing tools can save time, increase efficiency, improve the quality of writing, and provide a platform for collaboration for academics. They can also help overcome writer’s block and find inspiration for new ideas.

CD: Can you say a little more about how AI can help academics or writers in general with finding inspiration for new ideas?

ChatGPT: Certainly! One way that AI-assisted writing tools can help writers to find inspiration for new ideas is by providing prompts or suggestions based on a writer’s previous work or areas of expertise. For example, a tool might analyze a writer’s previous articles or papers and suggest new topics or angles that are related to their existing work. This can be a helpful way to spark new ideas and inspire fresh thinking, as it allows writers to build on their existing knowledge and experience.
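
[Editors’ note: as a concrete sketch of the kind of workflow ChatGPT describes, something like the following Python snippet could ask a model to propose new topics from a writer’s past abstracts. The OpenAI client usage, model name, and prompt wording are our illustrative assumptions, not part of ChatGPT’s answer.]

```python
# Hypothetical sketch: ask a language model to suggest new research
# directions based on a writer's previous abstracts.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

past_abstracts = [
    "Abstract of a previous paper on peer review practices...",
    "Abstract of a previous paper on academic writing workshops...",
]

prompt = (
    "Here are abstracts of my previous papers:\n\n"
    + "\n\n".join(past_abstracts)
    + "\n\nSuggest three related but unexplored research questions."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The output here is raw material, not a finished idea: the suggestions still need the writer’s own judgment about what is worth pursuing.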

CD: Can you think of any ethical pitfalls for writers who use AI tools to analyze and develop their existing writing?

ChatGPT: One potential pitfall is the risk of overreliance on AI tools. While AI tools can be a helpful resource for generating ideas, organizing thoughts, and improving the quality of writing, it is important for writers to retain their own critical thinking and creative judgment. Writers should not blindly rely on AI tools to do all of the work for them, and should take the time to carefully review and revise their writing to ensure that it accurately reflects their own ideas and arguments.

Another ethical pitfall to consider is the potential for AI tools to perpetuate biases that may exist in the data that they are trained on. For example, if an AI tool is trained on a dataset that is heavily skewed towards one perspective or demographic, it may produce biased or unbalanced results when applied to other writing. Writers should be aware of this potential pitfall and take steps to ensure that their use of AI tools does not perpetuate biases in their own work.

[Image: a person shaking hands with a robot.]
An AI-generated image seems appropriate. (Prompt: “A writer talking to herself in a mirror, but the reflection is a robot.”) Courtesy of craiyon.com.

Finally, writers should be mindful of intellectual property and copyright issues when using AI tools. It is important to ensure that any ideas or content generated by AI tools are properly credited and attributed, and to respect the intellectual property rights of others.

CD: Let’s focus on the issue of algorithmic bias that you mentioned. This may be an unfair question to ask an AI language model, but how can writers ensure that they do not perpetuate biases in their own work?

ChatGPT: Thank you for your understanding. There are several ways that writers can address the issue of algorithmic bias when using AI tools. One way is to be aware of the potential for bias in the data that is used to train AI algorithms. If the data used to train an AI algorithm is heavily skewed or unbalanced in some way, the algorithm may produce biased results when applied to other data. Writers can take steps to ensure that they are using diverse and representative datasets when training AI algorithms, and can also carefully review the results produced by the algorithm to ensure that they are not biased.
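
[Editors’ note: “reviewing the data” can be as simple as counting. The toy sketch below tallies how a hypothetical training corpus breaks down by one attribute; the file name and field name are our assumptions, purely for illustration.]

```python
# Illustrative sketch: check how balanced a training corpus is across
# one (hypothetical) attribute before using it to train a writing tool.
import json
from collections import Counter

with open("training_corpus.jsonl") as f:   # hypothetical JSONL corpus
    records = [json.loads(line) for line in f]

counts = Counter(rec.get("source_region", "unknown") for rec in records)
total = sum(counts.values())

for region, count in counts.most_common():
    print(f"{region:>12}: {count:5d} ({100 * count / total:.1f}%)")
```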

Another way that writers can address the issue of algorithmic bias is to use multiple AI tools and compare the results produced by each one. This can help to mitigate the risk of relying on a single algorithm that may be biased in some way.

Finally, writers can also seek out AI tools that are specifically designed to address issues of bias, such as tools that use techniques like fair representation learning or adversarial learning to mitigate biases in the data.
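
[Editors’ note: for readers curious what “adversarial learning” means here, the toy PyTorch sketch below runs on synthetic data: an encoder learns a task while being penalized whenever a small adversary can recover a protected attribute from its representation. This is our simplified illustration of the general idea, not a production debiasing method.]

```python
# Toy adversarial debiasing sketch on synthetic data (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

n, d = 512, 16
X = torch.randn(n, d)                  # synthetic features
y = torch.randint(0, 2, (n,)).float()  # synthetic task labels
a = torch.randint(0, 2, (n,)).float()  # synthetic protected attribute

encoder = nn.Sequential(nn.Linear(d, 8), nn.ReLU())
task_head = nn.Linear(8, 1)
adversary = nn.Linear(8, 1)

opt_main = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()), lr=1e-2
)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    # 1) Train the adversary to predict the protected attribute
    #    from the (frozen) representation.
    z = encoder(X).detach()
    adv_loss = bce(adversary(z).squeeze(1), a)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Train encoder + task head: do the task well while making
    #    the adversary's job harder (subtract its loss).
    z = encoder(X)
    task_loss = bce(task_head(z).squeeze(1), y)
    fool_loss = bce(adversary(z).squeeze(1), a)
    main_loss = task_loss - 0.5 * fool_loss
    opt_main.zero_grad()
    main_loss.backward()
    opt_main.step()
```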

CD: Let’s change course somewhat. There have been concerns recently that AI language models, including yourself, are overconfident when giving wildly wrong answers or when promoting misinformation. How would you respond to these accusations?

ChatGPT: AI language models, including myself, are trained on large datasets of text and are designed to generate human-like text based on the patterns and trends identified in that data. While AI language models can be very effective at generating text that is coherent and resembles human language, they are not perfect and may sometimes provide inaccurate or misleading information. This can occur if the data used to train the model contains errors or misinformation, or if the model is asked to generate text based on incomplete or ambiguous information. It is important for users of AI language models to carefully evaluate the output of these models and to use their own critical thinking and judgment when interpreting the information generated by the model.

CD: Can you summarize our conversation thus far? 

ChatGPT: In this conversation, we have discussed the potential benefits of AI-assisted writing tools for academics, including the ability to save time, increase efficiency, and improve the quality of writing. We have also discussed the potential for AI to help writers find inspiration for new ideas through the use of machine learning algorithms and natural language processing techniques. Additionally, we have discussed the potential ethical pitfalls of using AI tools to analyze and develop writing, including the risk of overreliance on AI and the potential for algorithmic bias. Finally, we have discussed the importance of critical thinking and judgment when evaluating and interpreting the output of AI language models.

CD: Do you have any parting words of wisdom for writers and editors considering using ChatGPT or other AI tools to help with their writing?

ChatGPT: Use AI tools responsibly, apply critical thinking and judgment, be aware of potential biases, and remember that they are a resource, not a substitute.