
Artificial Intelligence (AI)

Best practices in the use of artificial intelligence tools for research

Artificial Intelligence (AI) and AI-assisted technologies are relatively new but developing quickly. With the release of ChatGPT in November 2022, the capabilities of generative AI to create fluent text were introduced to the general public, and ChatGPT's capabilities were upgraded in March 2023 with the introduction of GPT-4. Generative AI has also been incorporated into Microsoft's Bing search engine, and Google has released its own version, called Bard. These chatbots have shown a remarkable ability to produce coherent and grammatically sound text in response to user queries.

It is important to understand that "AI" in this context does not refer to actual understanding or intelligence, but to a computer science field that aims to automate complex tasks previously accomplishable only through human intelligence. These tools belong to a branch of machine learning called deep learning, which uses neural network models to create Large Language Models (LLMs). These LLMs are trained on huge sets of data and calculate which word or piece of text is most likely to come next in a response, resulting in natural-seeming text. A technical report from OpenAI is available at https://doi.org/10.48550/arXiv.2303.08774, and an excellent, if lengthy, explanation of how this works is available on Stephen Wolfram's blog (https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/).
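The "most likely next word" idea can be illustrated with a deliberately toy sketch: a bigram model that counts which word follows which in a tiny sample corpus and picks the most frequent follower. This is only an analogy for illustration, not how an LLM is actually built; real models use neural networks with billions of parameters and operate on sub-word tokens rather than whole words.

```python
# Toy illustration of next-word prediction (NOT a real LLM):
# count which word follows each word in a tiny corpus, then
# predict the most frequent follower.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# For each word, tally the words that immediately follow it.
next_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_counts[current][following] += 1

def most_likely_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = next_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("the"))  # "cat" follows "the" twice, "mat" once
```

Chaining such predictions word by word yields fluent-looking text, which is why the output reads naturally even when nothing in it has been checked for truth.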

Illinois Tech's Policy on the Use of AI

Students are advised to ask their professors if the use of AI tools is allowed for specific classes.

Publisher Policies on the Use of AI

Publisher policy on the use of generative AI is constantly changing and varies widely from one publisher to the next, or even from one journal to the next. Researchers interested in publishing work aided by generative AI should consult with their publisher regarding the use of those tools. As a service to researchers, we offer the following list of policies from some of the major publishers.

Caveats and Cautions about Generative AI

These chatbots are well known for generating fictional information, often called "hallucinations," so any information they produce must be checked against authoritative sources before use. Additionally, NewsGuard's Red Team exercises have found ChatGPT and Bard to be exceptionally adept at reproducing misinformation and prominent false narratives, and neither has shown significant improvement despite user feedback and developers' public commitments to action in these areas.

Whether or not this meets the technical definition of plagiarism, AI content generators are unable to demonstrate the transparency of their information sources or even compliance with copyright requirements. Furthermore, ChatGPT in particular seems unable to connect its output to real sources, quickly generating very real-looking citations and website URLs that are usually (~80% of the time in this author's investigations) completely fabricated. A notable exception: Kishony and Ifargan were evidently able to write software that fed correct citations back into ChatGPT after it produced false ones, and corrected other errors, in order to generate a paper (Conroy: doi.org/10.1038/d41586-023-02218-z). When prompted for scholarly articles, Bard currently refuses the assignment, while Bing AI does refer to some actual journal articles, as well as the Wikipedia entry on the topic. A good article asking "What academic research is ChatGPT accessing?" is available at https://www.linkedin.com/pulse/what-academic-research-chatgpt-accessing-danny-kingsley. It seems likely that abstracts of paywalled journals are being used, prompting the provocative suggestion that "misinformation is free and the truth is paywalled."
In addition to concerns about accuracy and plagiarism, Culp notes that Italy has blocked ChatGPT due to privacy concerns, and suggests a real risk of scientific stagnation if the use of such models becomes widespread (Culp: doi.org/10.46374/volxxv_issue2_Culp).

Citing ChatGPT or Other AI Sources

All references should provide clear and accurate information for each source and should identify where it has been used in your work. However, content generated by AI is nonrecoverable: it cannot be retrieved later or linked to, so it cannot be cited in the same way as a book or journal article.

Currently, there are no universal guidelines for citing AI-generated content. Many sources recommend citing this content using the reference style for personal communication or correspondence, while others recommend the style for computer software.

We recommend asking your professor or publisher for specific guidance.