Question:
Now that I’m on the tenure track, I’m trying to submit to high-impact journals that I’ve never published in before. On my most recent revise-and-resubmit, the peer reviewer told me that I didn’t write like a native English speaker. Which is true, because I’m not one! What can I do to sound like a native English speaker when I’m not? To sound more academic?
– Anonymous, Economics
Dr. Editor’s response:
It is infuriating that peer reviewers are policing your writing when they should focus on the quality of your research. Few academics have expertise in writing, and yet they respond to the idioms of non-native English speakers as if those idioms are errors – “a waste of money, time and morale, especially when there is nothing to correct” (Herrera 1999). Of course, research needs to be communicated clearly and effectively – indeed, it’s my job to help academics be understood by their peers – but being a native English speaker is not the same as being able to write well. Most of your reviewers will be average writers who don’t know just how average their writing is.
If you’re submitting to a new journal, I’d encourage you to read a number of articles it has published, to familiarize yourself with the style and conventions of that publication. You can also use a resource like writingwellishard.com to compare the characteristics of your own writing to those of articles published by that journal; I designed that site specifically for academics, and I created a free three-minute video as well as a 13-page PDF to help you make the most of that tool.
But both familiarizing yourself with the language used in a specific journal and dedicating time to analyzing the patterns in your writing are resource-intensive activities for which you might not currently have the time or cognitive capacity. And – it’s the age of AI! There must be ways to get the robots’ help. To see if there are any worthwhile shortcuts, and to learn more about sounding like an economist specifically, I spoke with my colleague Wes Cowley, an academic editor who specializes in computer science, economics and related fields.
1. ChatGPT won’t save you
If you ask ChatGPT to “elevate” your language or make you sound like an academic, you risk making your text worse, not better, Wes warned me. Said Wes:
From a practical point of view: ChatGPT and other large language models (LLMs) don’t understand the meaning of the text they’re working on. Instead, an LLM produces words based on the probabilities of which words follow the ones it has already seen or generated in the conversation. So, when asked to make your writing more formal or academic-sounding, the LLM will simply replace your words with others that are near synonyms, but that often don’t mean exactly the same thing. (Dr. Editor’s note: an article on Wes’s site goes into more detail about this issue.)
I’ve run my own informal tests with ChatGPT. It changed a phrase in one abstract from “influence the behaviour of banks” to “disrupt the operations of financial institutions.” The first phrase sounds like an economics paper. The second, more like fintech marketing material or perhaps “Occupy Wall Street”.
Because we can’t access the corpus on which ChatGPT is trained or the statistical model it builds from that corpus, we don’t know exactly what it treats as its standard of formal or high-quality writing, so each person who uses it has to critically assess the meaning of the rephrasings it generates.
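To make Wes’s point about probabilities concrete, here is a deliberately tiny sketch in Python. It builds a table of which words follow which from one invented sample sentence, then picks each next word in proportion to how often it followed the previous one. Real LLMs use vastly larger corpora and neural networks rather than simple counts, so treat this as an illustration of the principle, not a description of ChatGPT itself:

```python
# Toy illustration of "words chosen by probability": a bigram sampler.
# The sample_text is invented for this example; nothing here resembles
# the scale, architecture, or training data of a real LLM.
import random
from collections import defaultdict

sample_text = (
    "capital requirements influence the behaviour of banks and "
    "capital requirements influence the cost of credit"
)

# Count which words follow which, e.g. {"influence": {"the": 2}}.
follows = defaultdict(lambda: defaultdict(int))
words = sample_text.split()
for current_word, following_word in zip(words, words[1:]):
    follows[current_word][following_word] += 1

def next_word(word: str) -> str:
    """Pick the next word in proportion to how often it followed `word`."""
    candidates = follows[word]
    choices, weights = zip(*candidates.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation, one probabilistic word at a time.
word = "capital"
generated = [word]
for _ in range(6):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))
```

Run it a few times and it will occasionally stitch together a phrase – “the behaviour of credit,” say – that is locally plausible but that the sample text never actually said: the same failure mode, in miniature, as the “disrupt the operations of financial institutions” rewrite above.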
But even if ChatGPT doesn’t change your meaning – and even if that’s something you can be certain about – Wes worries about that large, unknown corpus:
Accidental plagiarism is also a concern. Because LLMs are trained on large bodies of text scraped from the web, and they generate their output based on that text, there is a chance that the text they produce will substantially resemble, or even replicate, other published work. And they may generate text for other users that is very similar to what they generated for you. Before submitting any text that was generated by these tools, be sure to run that text through plagiarism detectors.
Finally, the copyright status of AI-generated text is still being worked out. In the U.S., AI-generated text cannot currently be protected by copyright. How that applies to text that has been “elevated” by an LLM remains unclear.
Wes’s concerns about plagiarism and copyright are concerns that I echo, and not just for moral and legal reasons: one of the datasets used to train generative AI contains “multiple works of aliens-built-the-pyramids pseudo-history by Erich von Däniken” (Reisner 2023). It’s one thing to accidentally plagiarize a copyrighted work; it’s another, substantially more embarrassing thing to accidentally plagiarize a bad one.
2. Draft out loud
To help your manuscript flow well in English, consider experimenting with drafting out loud. We all speak differently from how we write, and it may be that you’re able to vocalize your ideas better – in closer alignment with your peer reviewers’ expectations – when you draft with your voice.
Microsoft Word and Google Docs both have speech-to-text options, as do many phone apps. Other software that will transcribe for you includes the AI-powered Otter.ai and Descript. Both of those tools have free plans with a set number of transcription minutes, and I’ve read on social media that they are able to understand diverse English-language accents:
I’ve been using Otter AI for a while now and it’s generally an enormous improvement on manual transcribing but can be a bit iffy with accents. About 85% for SE UK, 75% for Australian, 90% west coast US. I’ve found its sweet spot: well educated Chinese English. About 97% so far!
— Ms D 🌈 (@msdwrites) June 8, 2022
Descript. It is very very good. It really has improved and is now almost perfect with Scottish and Irish accent which it used to struggle with. So it certainly speeds me up in terms of transcription, but I’m obsessive, especially with long reads, and pore over text for detail etc
— NeilMackay (@NeilMackay) May 31, 2023
Whichever transcription tool you choose, you can draft your manuscript as if you were talking about your work to a colleague. And because you’re just drafting, you don’t need to worry too much about mistakes at this stage. Just give yourself a cue like “backing up” or “oops, that’s wrong,” and then start talking again. You can make cuts to your transcript later.
This is an approach that I love for the narrative portions of a manuscript: your opening and closing paragraphs; your big, key take-aways; the places in which you’re sharing your analysis and interpretation rather than reporting results or describing the math behind your models. Drafting out loud isn’t the best approach for everyone, but it’s worth experimenting with, especially for the parts of your manuscript in which you need to describe, synthesize, or discuss – and these are the parts where language, clarity, and the author’s voice stand out the most.
3. Edit your transcript
If you’ve drafted with your voice, you’ll need to clean up the words your tool has transcribed inaccurately, and then ensure that the sentences and paragraphs break in the right places. Then, you’ll want to read for logic – does your transcript say what you wanted to convey? Can you integrate it smoothly with any parts you’ve written with your fingers, rather than your voice?
Wes and I always recommend sending your most important manuscripts to an editor, but we make that suggestion whether or not you’re a native English speaker. Editors know how to revise for structure and style, helping you to ensure that your most important ideas take the spotlight, and that your hard-laboured-over research can be understood, cited, funded, or acted on.
If you have start-up funds or grant money, you can use them to hire an economics specialist like Wes. If you don’t have access to either of those pools, you might consider investing your professional development funds in a course like my own “Becoming a Better Editor of Your Own Work.” And anyone can of course access my free resources and Wes’s own articles, or you can send me a question for a future column.