ChatGPT writes research paper on academic integrity and AI
An academic journal article exploring how to ensure academic integrity in the age of ChatGPT was largely written by the generative artificial intelligence tool.
The paper, published on March 13 in the journal Innovations in Education and Teaching International by academics at Plymouth Marjon University and the University of Plymouth in England, looks and reads much like any other academic journal article. But in the discussion section, the paper’s three human authors reveal their secret.
“As the alert reader may already have guessed, everything up to this point in the paper was written directly by ChatGPT, with the exception of the sub-headings and references,” the authors wrote. “Our intent in taking this approach is to highlight how sophisticated Large Language Models (LLMs) such as ChatGPT have become.”
The text ChatGPT produces is “quite formulaic,” the paper suggests, and students who prompted the AI tool to write their college essays would likely end up with very similar material that a human reader could readily detect. Additionally, plagiarism detection companies such as Turnitin are already working on tools to detect AI-authored text.
Regardless of how good AI-generated text detection might be, the power of ChatGPT to generate text that could pass as written by an academic “should serve as a wake-up call to university staff to think very carefully about the design of their assessments and ways to ensure that academic dishonesty is clearly explained to students and minimized,” the authors wrote.