At current levels of development, generative AI excels at creating bland, inoffensive, formulaic types of documents—things like memos, summaries, abstracts, business letters, and the like. Below, we list several examples of using ChatGPT to generate these types of documents.
One of the greatest drawbacks to using generative AI tools for writing is their tendency to hallucinate, or "make things up." Hallucination is inherent in the underlying language models and in the way AI tools are programmed to respond. When a generative AI tool cannot find an answer to your question, its programming still requires it to provide one. It does this by combining fragments of text into what it considers a "plausible" answer. AI developers regard this as a form of creativity that replicates human behavior. Most computer users, however, assume that computers and AI cannot lie and are completely trustworthy. The problem is compounded by the fact that AI tools present their hallucinations as fact and provide no disclaimers that the results are speculative.
Another serious issue with using generative AI tools for writing is that they freely engage in plagiarism. The very nature of the large language models (LLMs) behind generative AI is to copy relevant text from various sources at random and combine those snippets into a coherent whole. Copying and reusing text is basically the definition of plagiarism. When using AI, you have no idea where the text is coming from, so you can't easily cite the sources. Worse, you have no idea whether the sources are even appropriate or legitimate. The tool could be plagiarizing from highly biased sources or bizarre fringe websites. For university-level writing, you could be inadvertently copying from high school or middle-school sources.
Researchers should NEVER submit the key parts of their research or manuscripts for analysis to any of the AI tools for any reason. By doing so, the text of your manuscript becomes part of the dataset used by the AI, making it publicly accessible and potentially considered published. This can jeopardize your ability to publish the manuscript in most academic journals, as the manuscript would have already been published. It can also prevent you from patenting any inventions discussed in your manuscript, because information shared with the AI can be considered prior art or a public disclosure by the patent office.
Generative AI tools like ChatGPT should NOT be used for writing academic papers. They are plagued by "hallucinations," or the tendency to make things up: generating false statements, creating fictitious citations, and plagiarizing from unidentified and often low-quality sources. This does not mean that generative AI is useless in academic writing. It can be used for:
In all these cases, generative AI should be used with care. AI algorithms cannot truly understand the passages they're asked to review or summarize and rely instead on comparing the text to other text samples or iteratively replacing words and rearranging text—a process known as "patch writing." Without true understanding of the original text, generative AI can introduce misinterpretations and other errors or, if used in research papers or assignments, open the author to charges of plagiarism. A close reading of the results is essential to ensure the AI did not misrepresent the original text or replicate the original too closely.
Most proposals, whether for grant funding or for business, have many standardized or formulaic sections. In grant proposals, this includes things like investigator biographies. Using AI for these sections can save time and effort, allowing more time to be spent on describing the actual project, its implications, and benefits.
Generative AI tools are well suited to writing the more formulaic types of business documents. They should be avoided in applications like marketing materials or other creative types of content.
This guide is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.